I agree that SQL Server Express would probably be a good solution to this issue.
Another solution (probably just as much work) would be to use disconnected recordsets.
Set the CursorLocation to adUseClient, the cursor type to adOpenStatic, and the lock type to adLockBatchOptimistic.
Open your recordset, then set its ActiveConnection to Nothing.
The data remains in the recordset for your use, but you can close the connection altogether. The connection isn't needed again until it's time to write any changes made locally back to the database.
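In VBA with ADO, that pattern looks roughly like the sketch below. The connection string, file path, and table name ("Customers") are placeholders; adjust them for your own database.

```vba
Dim cn As ADODB.Connection
Dim rs As ADODB.Recordset

Set cn = New ADODB.Connection
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MyDb.mdb"

Set rs = New ADODB.Recordset
rs.CursorLocation = adUseClient         ' client-side cursor
rs.CursorType = adOpenStatic            ' static cursor
rs.LockType = adLockBatchOptimistic     ' batch-update lock type
rs.Open "SELECT * FROM Customers", cn

Set rs.ActiveConnection = Nothing       ' disconnect the recordset
cn.Close                                ' the data stays in rs
Set cn = Nothing

' ... browse and edit rs locally, with no open connection ...

' When it's time to save, reconnect briefly and batch-update:
Set cn = New ADODB.Connection
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MyDb.mdb"
Set rs.ActiveConnection = cn
rs.UpdateBatch                          ' write local changes back
cn.Close
Set cn = Nothing
```

The key point is that the connection is only open for the initial fetch and the final UpdateBatch, which keeps the window for file corruption as small as possible.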
See also that thread, and specifically the book mentioned there.
The advantage of a database server is that IT controls access to the underlying data, so the underlying data will always be consistent. With Access, as you've seen, if the file access/update process is interrupted, your database can be corrupted. But by limiting the exposure of the open connection (as described above), you can limit the opportunity for corruption as well. You can't eliminate it, as you could with a server, so you'll still need to be prepared to repair corrupted databases, but you can reduce the number of incidents.
Meanwhile, if you haven't been keeping track of the downtime associated with corrupt data files, you should, along with the time you've spent fixing those files. That gives you a baseline for the true cost of continuing to use the Access data files, and justification for moving to a database server-based approach.
As Gruff mentioned, "free" is a good way to get started with a database server. In addition to SQL Server Express, Oracle and IBM (DB2) offer limited (normally by database size) server-based editions for free. PostgreSQL is also a very powerful, open-source enterprise database server that is 100 percent free of charge and can handle terabyte-scale database applications. MySQL can as well, but it isn't as safe with your data as PostgreSQL, and licensing costs in commercial applications can be an issue.