Deploying to Azure Container Instances

lonevvolf
Community Member
edited May 2022 in Secrets Automation

Has anyone used this with Azure Container Instances? I am trying to set up the Connect Server, but I can't get the two containers to share a single file share - it seems like they are locking against each other.
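Roughly, the container group I'm deploying looks like this (a sketch with placeholder names, region, and share details, not my exact config):

    apiVersion: '2021-10-01'   # any recent ACI API version
    location: westeurope       # placeholder region
    name: op-connect
    type: Microsoft.ContainerInstance/containerGroups
    properties:
      osType: Linux
      containers:
      - name: connect-api
        properties:
          image: 1password/connect-api:latest
          ports:
          - port: 8080
          resources:
            requests:
              cpu: 0.5
              memoryInGB: 1.0
          volumeMounts:
          - name: shared-data
            mountPath: /home/opuser/.op/data
      - name: connect-sync
        properties:
          image: 1password/connect-sync:latest
          resources:
            requests:
              cpu: 0.5
              memoryInGB: 1.0
          volumeMounts:
          - name: shared-data
            mountPath: /home/opuser/.op/data
      volumes:
      # The share is SMB-backed Azure Files; SQLite file locking over SMB is
      # my suspect for the two containers appearing to lock against each other.
      - name: shared-data
        azureFile:
          shareName: op-connect-data        # placeholder
          storageAccountName: mystorageacct # placeholder
          storageAccountKey: <key>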


1Password Version: Not Provided
Extension Version: Not Provided
OS Version: Not Provided

Comments

  • sshipway
    Community Member

    Yes - I've been able to create an AZDO pipeline with container manifests to deploy both the Connect and SCIM, and test them after deployment. This takes the various config items from the Library in AZDO. Our exact config would be unlikely to work for you, as we use metadata services to provision the DNS and certificate over the top of it, but I can let you have some templates to work with if you want.
    My biggest sticking point was getting the correct network policy defined, and using the correct container version for SCIM. I also completely failed to make persistent volume claims work, but that doesn't seem to be a problem as I just use ephemeral volumes.

    I can't attach the template files here (only images allowed) but I can email them if you want, and the rest of the AZDO pipeline.

    Here are the templates for 1Password Connect:

    -Steve

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: #{servicename}#-connect
      namespace: "#{namespace}#"
    spec:
      selector:
        matchLabels:
          app: #{servicename}#-connect
      template:
        metadata:
          labels:
            app: #{servicename}#-connect
            version: #{connect.version}#
        spec:
          volumes:
          - name: shared-data
            emptyDir: {}
          - name: credentials
            secret:
              secretName: op-credentials
          initContainers:
          # Runs as root so it can hand the shared volume over to the
          # non-privileged opuser (UID 999) before the Connect containers start.
          - name: sqlite-permissions
            image: alpine:3.12
            command:
            - "/bin/sh"
            - "-c"
            args:
            - "mkdir -p /home/opuser/.op/data && chown -R 999 /home/opuser && chmod -R 700 /home/opuser && chmod -f -R 600 /home/opuser/.op/config || :"
            volumeMounts:
            - mountPath: /home/opuser/.op/data
              name: shared-data
          containers:
          - name: connect-api
            image: 1password/connect-api:#{connect.version}#
            resources:
              limits:
                memory: "128Mi"
                cpu: "0.2"
            ports:
            - containerPort: 8080
              name: connect-api
            env:
            - name: OP_SESSION
              value: /home/opuser/.config/1password-credentials.json
            volumeMounts:
            - mountPath: /home/opuser/.op/data
              name: shared-data
            - mountPath: /home/opuser/.config
              name: credentials
              readOnly: true
            livenessProbe:
              httpGet:
                path: /heartbeat
                port: 8080
              initialDelaySeconds: 3
              periodSeconds: 3
            readinessProbe:
              httpGet:
                path: /health
                port: 8080
              initialDelaySeconds: 5
              periodSeconds: 5
          - name: connect-sync
            # Pinned to the same version token as connect-api rather than
            # :latest, so the two containers stay in step.
            image: 1password/connect-sync:#{connect.version}#
            resources:
              limits:
                memory: "128Mi"
                cpu: "0.2"
            ports:
            - containerPort: 8081
              name: connect-sync
            env:
            - name: OP_HTTP_PORT
              value: "8081"
            - name: OP_SESSION
              value: /home/opuser/.config/1password-credentials.json
            volumeMounts:
            - mountPath: /home/opuser/.op/data
              name: shared-data
            - mountPath: /home/opuser/.config
              name: credentials
              readOnly: true
            livenessProbe:
              httpGet:
                path: /heartbeat
                port: 8081   # sync serves its health endpoints on OP_HTTP_PORT
              initialDelaySeconds: 3
              periodSeconds: 3
            readinessProbe:
              httpGet:
                path: /health
                port: 8081
              initialDelaySeconds: 5
              periodSeconds: 5

    apiVersion: v1
    kind: Service
    metadata:
      name: #{servicename}#-connect
    spec:
      selector:
        app: #{servicename}#-connect
      ports:
      - port: 8080
        targetPort: connect-api
        name: connect-api
        protocol: TCP
      - port: 8081
        targetPort: connect-sync
        name: connect-sync
        protocol: TCP

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: "#{servicename}#-connect-ingress"
      namespace: "#{namespace}#"
      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/default-backend: "#{servicename}#-connect"
        nginx.ingress.kubernetes.io/enable-access-log: "true"
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        external-dns.alpha.kubernetes.io/hostname: "#{connect.endpoint}#"
        cert-manager.io/cluster-issuer: "#{connect.certissuer}#"
        dns-provider: "#{connect.dnsprovider}#"
    spec:
      ingressClassName: "internal-nginx"
      rules:
      - host: "#{connect.endpoint}#"
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "#{servicename}#-connect"
                port:
                  number: 8080
      tls:
      - hosts:
        - "#{connect.endpoint}#"
        secretName: "#{servicename}#-connect-cert"

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: "#{servicename}#-connect-policy"
    spec:
      podSelector:
        matchLabels:
          app: "#{servicename}#-connect"
      policyTypes:
      - Ingress
      - Egress
      ingress:
      - ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 8080
        - protocol: TCP
          port: 8081
      egress:
      - ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 80
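
    The #{...}# placeholders get swapped in by a token-replacement step before the manifests are applied. Something along these lines - illustrative, not our exact pipeline, which pulls the values from the Library:

    steps:
    # Replace Tokens task (qetza.replacetokens); its default token pattern
    # is #{variable}#, which matches the templates above.
    - task: replacetokens@5
      inputs:
        targetFiles: 'manifests/*.yaml'
    # Then apply the rendered manifests; service connection details omitted.
    - task: Kubernetes@1
      inputs:
        command: apply
        arguments: -f manifests/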

  • lonevvolf
    Community Member

    THANK YOU!

    The key here seems to be the folder-permissions step in the initContainers section. Why is it necessary to set this ourselves? Shouldn't it be part of the container spin-up?

  • sshipway
    Community Member

    I think both containers run as a non-privileged user (i.e. opuser), so they can't set up the initial shared filesystem themselves. The init container runs as root, so it can do the required initial chown before exiting.
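
    In theory you could also drop the init container and let the kubelet set the ownership via a pod-level securityContext - a sketch, assuming the Connect images really do run as UID/GID 999 (the IDs the chown above targets):

    spec:
      template:
        spec:
          securityContext:
            runAsUser: 999
            runAsGroup: 999
            # Volumes that support ownership management (emptyDir does)
            # are group-owned by fsGroup when the pod starts.
            fsGroup: 999

    I haven't tested that variant myself, though.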

This discussion has been closed.