I'm having trouble getting Connect up and running with Docker Compose. I believe my problem is related to a user and/or its rights. I am running Docker on my Synology NAS, where I created a specific 'Shared Folder' named 1password. I also created a user (with UID 1042) for further trial-and-error testing.
I currently have the following compose file:
version: "3.4"
services:
  1password-connect-api:
    image: 1password/connect-api:latest
    #user: "1042"
    ports:
      - "8888:8080"
    volumes:
      - /volume1/1password/1password-credentials.json:/home/opuser/.op/1password-credentials.json:ro
      - /volume1/1password/data:/home/opuser/.op/data
    restart: unless-stopped
  1password-connect-sync:
    image: 1password/connect-sync:latest
    #user: "1042"
    ports:
      - "8881:8080"
    volumes:
      - /volume1/1password/1password-credentials.json:/home/opuser/.op/1password-credentials.json:ro
      - /volume1/1password/data:/home/opuser/.op/data
    restart: unless-stopped
This gives me the following error from both containers:
unspecified err: stat /home/opuser/.op/data/1password.sqlite: permission denied
This led me to configure a '1password' user (UID 1042) and tell the containers to run as that user using:
user: 1042
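For context, that directive goes at the service level of the compose file; a minimal sketch for one service:

```yaml
services:
  1password-connect-api:
    image: 1password/connect-api:latest
    user: "1042"   # run the container's process as this host UID
```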
This does get me past the permission-denied problems (which makes me assume the 'user' setting does indeed work and 'do something'). But now the containers don't seem to be able to find and/or create the database:
1password-connect-api_1 | {"log_message":"(I) no database found, will retry in 1s","timestamp":"2022-01-14T13:13:28.600675519Z","level":3}
1password-connect-sync_1 | {"log_message":"(I) no existing database found, will initialize at /.op/data/1password.sqlite","timestamp":"2022-01-14T13:13:27.719489908Z","level":3}
1password-connect-sync_1 | Error: Server: (failed to OpenDefault), Wrapped: (failed to open db), unable to open database file: no such file or directory
I also tried changing the permissions of the Shared Folder and its subfolders so that 'Everyone' has read/write access, but then I get an error like 'Permissions too broad' (which seems quite fair, and indeed very unwanted).
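As a narrower alternative to granting 'Everyone' read/write, the folder can be handed to UID 1042 alone. This is only a sketch, assuming shell (SSH) access to the NAS; it uses a temporary stand-in directory so it is safe to dry-run, and GID 100 ('users' on Synology) is an assumption about the setup:

```shell
# Stand-in for /volume1/1password so the commands can be dry-run anywhere;
# on the NAS you would target the real path (and would need sudo).
DIR=$(mktemp -d)
mkdir -p "$DIR/data"
touch "$DIR/1password-credentials.json"

chown -R 1042:100 "$DIR" 2>/dev/null || true  # needs root on the real NAS path
chmod -R u+rwX,go-rwx "$DIR"                  # owner only: rw, +x on directories
```

After this, only the owner (UID 1042, which the containers run as via `user: "1042"`) can read the credentials file.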
I am currently out of ideas on how to get this running. Anyone who can point me in the right direction?
1Password Version: 7.9.2
Extension Version: Not Provided
OS Version: macOS 12.1
Comments
I am actually one step further. I tried to use the example Docker Compose file (https://github.com/1Password/connect/blob/main/examples/docker/compose/docker-compose.yaml) instead of my modified one. I guess I don't fully understand the usage of volumes yet. I got the containers running.
However, I had to give 'Everyone' permission to the folder, thus including access to the credentials JSON. Not something I believe I want to keep using.
When using the 'user: "1042"' config, the containers fail to create a database:
What is the proper way to run the containers from within a limited-access folder?
Team Member
Hi @miura,
Thank you for reaching out. Sorry for my late reply, but I am glad you made some progress. What you are describing is indeed a less-than-ideal situation with docker-compose. I am not sure if this is something that we can address on a Synology NAS, but let's give it a try.
The 'no database found' log line you're seeing is not necessarily an error. It indicates that the API container is waiting for the sync container to come online and create the shared database. Could you maybe share the logs of the sync container?

Just to be sure: I removed 'Everyone' from the permission list of the folder. My Docker Compose file with the specified user ID:
User '1password' having Read/Write access to my '1password' Shared Folder and (sub)files containing the docker-compose.yml and credentials file, including the UID of this user

Logging from api-container
Logging from sync-container
Team Member
Thanks for all the extra info, that is really helpful!
What I think is happening is that the 'data' volume (the second volume) has been created, but is not accessible to the correct user. There are a few things we can try. First, remove any volumes created by previous docker-compose commands; that can be done with 'docker-compose down -v'. Then try bringing it up again. Let me know if this changes anything.
Logging from the api-container
Logging from the sync-container
Logging from the api-container
Logging from the sync-container
After:
1. properly removing the volume with 'docker-compose down -v' (how do you use inline-quoting :) ?) and recreating the containers, and
2. trying the different mounting path for the data volume
(both show the same behaviour)
api-container
sync-container
Team Member
It seems like something is going wrong now with determining the home directory. Connect says it is looking in '/.op/data/1password.sqlite' instead of '/home/opuser/.op/data/1password.sqlite'.

What we can give a try is manually specifying the correct location by setting 'XDG_DATA_HOME' to '/home/opuser/'. You can do that by adding the following line to the specification of both containers (just above 'volumes:'):

PS. you can use inline code blocks by wrapping the code in a single tilde, so:
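The environment entry described above would look something like this sketch (added to both services, just above 'volumes:'):

```yaml
environment:
  - XDG_DATA_HOME=/home/opuser/   # tell Connect where its data directory lives
```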
Team Member
Hmm, that's a bummer.
Some things you could try:
Or, if the Synology interface allows this, you could try making the '1password' user the owner of that directory (I am not familiar enough with Synology to tell if this is possible).

I am digging a bit deeper into my problems, and I am wondering whether this image layer could be (partially) the cause of my issue:
The folders are very specifically created by user opuser. No matter how I mount my volumes ('- "data:..."' or '- "./data:..."'), I will keep having mismatches between the Docker user and the host folder. Do you agree with my findings? If so, what could we do to work around it :)

One of the problems is that, when using the '- "data:.."' volume mount, the containers exit before I can exec into them to do a 'chown':

This gives me no time to run any command, unfortunately.
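Since the containers exit before a 'docker exec' is possible, one workaround is a one-shot helper service that fixes the volume's ownership before the Connect services start. This is a hypothetical sketch, not something from the thread: the service name 'fix-perms' and the alpine image are assumptions.

```yaml
services:
  fix-perms:                        # hypothetical helper, runs once as root
    image: alpine:latest
    user: "0"
    command: chown -R 1042 /data    # hand the named volume to UID 1042
    volumes:
      - data:/data
  1password-connect-sync:
    image: 1password/connect-sync:latest
    user: "1042"
    depends_on:
      fix-perms:
        # requires a Compose version that supports completion conditions
        condition: service_completed_successfully
    volumes:
      - data:/home/opuser/.op/data
volumes:
  data:
```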
Team Member
I think you're right here. Though that also suggests an alternative solution: what if you replace both volume mounts with this one: './:/home/opuser/.op' (make sure the '1password-credentials.json' file is in './'). With a bit of luck, that works, because '/home/opuser/.op' is then owned by user 1042.

Alternatively, it is possible to execute a command during startup by modifying the entrypoint of one of the containers:
or
One final thing worth checking: is it possible to choose which ID gets assigned to the user you create in the Synology software? If so, could you create one with ID 999?
Using:
entrypoint: ["/bin/sh", "-c", "chown -R 1042 /home/opuser/.op && connect-api"]
Gives:
Using
entrypoint: ["/bin/sh", "-c", "sudo chown -R 1042 /home/opuser/.op && connect-api"]
Gives:
Just to be sure, is this what you meant with your first suggestion?:
Team Member
Yes. Assuming that './' is owned by user '1042' in the Synology interface.

Terminal screenshot from the host:

Terminal screenshot from within the container:

Showing that the folders are indeed owned by 1042. Unfortunately, still:
and
I did have some other findings though:
I recreated the host folder from scratch, leaving root as the owner. I gave the usergroup 'SYSTEM' (id 1, I believe) access and left the 'user: "1042"' out of the docker-compose file, basically reverting back to the example provided by 1Password. This actually does work! ... ? What I am not 100% sure of is whether this is at all safe ...

Team Member
That's interesting and good to hear! I am inclined to say that that should be okay. Your main concern should be whether other users can access the directory (especially the credentials file) when accessing your NAS. I know too little about Synology or the exact setup to give a definitive answer, but my feeling is that giving 'SYSTEM' access should not be a problem. In fact, I'd expect that user to always have had access.

For what it is still worth, the most recent logs seem to point at the same problem as here.
Just for the sake of testing, I tested with this compose-file:
In combination with 'sudo chown -R 1042 1password' to make sure the user '1042' is indeed the owner of the main folder and all of its children. This results in the following:

I guess 1password still has some hard-coded user-config .. :) ?
Team Member
That's also what I thought, but I checked the code and that is not the case. The check uses the ID of the user that is running the process. So that should be '1042'.

One final idea is that the root of the mountpoint gets treated differently by Docker. If that is the case, changing the mount to '- "./:/home/opuser/"' might work. That is something you could try for educational purposes, as you've already gotten it to work :)

I tried to play around a bit with your last suggestion. Also, instead of using the 'Shared Folder' as the root folder, I created a folder inside the Shared Folder and ran from that one. Perhaps Synology/Docker treats a Shared Folder a bit differently as well.
Unfortunately, all of them end up with a:

I guess my (our ;) ) 'quest' ends here, as I do have a working workaround/solution. Thanks a lot for your awesome support!
Team Member
Thank you so much for thinking along and giving a lot of things a try!