I currently use the aws.s3.Folder deployable (an overlay of file.Folder), but the deployment behaviour is not ideal: XLD destroys all of the deployed files and then pushes the package files.

The behaviour of "aws s3 sync" is (sketched in code below):
- Add files that exist locally but not remotely.
- Update files that exist both locally and remotely.
- Delete remote files that no longer exist locally (when run with the --delete flag).
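
A minimal Python sketch of that logic, assuming boto3 and a flat key layout. The function name sync_folder and its parameters are illustrative, not part of XLD or the AWS CLI:

    import os
    import hashlib
    import boto3

    def sync_folder(local_dir, bucket, prefix="", delete=True):
        """Mimic the three rules above: upload new files, update changed
        ones and, optionally, delete remote objects missing locally."""
        s3 = boto3.client("s3")

        # Map existing remote keys under the prefix to their ETags.
        remote = {}
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                remote[obj["Key"]] = obj["ETag"].strip('"')

        # Upload files that are new locally or whose content changed.
        local_keys = set()
        for root, _, files in os.walk(local_dir):
            for name in files:
                path = os.path.join(root, name)
                key = prefix + os.path.relpath(path, local_dir).replace(os.sep, "/")
                local_keys.add(key)
                with open(path, "rb") as f:
                    md5 = hashlib.md5(f.read()).hexdigest()
                # The ETag equals the MD5 only for single-part uploads;
                # the real CLI compares size and timestamp instead.
                if remote.get(key) != md5:
                    s3.upload_file(path, bucket, key)

        # Delete remote objects with no local counterpart
        # (the equivalent of "aws s3 sync --delete").
        if delete:
            for key in set(remote) - local_keys:
                s3.delete_object(Bucket=bucket, Key=key)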

Comments

  • Thank you for the request.

    We have a few follow-up questions:
    - Could you tell us what the primary need for this change is? Is it reducing deployment time or S3 data cost?
    - Do you deploy many small files or a few big ones?
    - We had a similar request from your company regarding file.Folder, which is a more generic CI (configuration item). Is this need specific to the AWS use case, or does it have a broader context as well?
    - Would you use this setting for all deployments or only for a subset of them?

    Please don't forget that you can vote on the items you add to IdeaSpace. This can be done by clicking the "Vote for" button/label.

  • The requirement stems from an end-user request to optimize the speed of S3 bucket uploads: currently XLD deletes all the files and then puts the new ones into the bucket, which is both time-consuming and destructive. The user needs functionality similar to the 'aws s3 sync' command. See some of the advantages of sync here: https://spacelift.io/blog/aws-s3-sync

    We have many teams with both variations, i.e. a small number of big files and a large number of tiny files.

    Similarly, it would be good to have better control over the folder/file copy operation, as we have some use cases where we just want to copy changed/incremental items to the server.

    I.e., currently if you copy A, B, C and later copy B* (* means changed), E, F, then XLD will try to undeploy A and C and copy B*, E, F. Our desired behaviour is to just copy B*, E and F without touching the existing A and C, as sketched below.
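
    With the hypothetical sync_folder sketch above, that desired incremental behaviour is the same sync with deletion turned off, which mirrors running "aws s3 sync" without the --delete flag (folder and bucket names here are placeholders):

        # After deploying A, B, C, a second run with B*, E, F in the local
        # folder uploads only B*, E and F; A and C stay in the bucket
        # because deletion is disabled.
        sync_folder("dist/", "my-bucket", prefix="app/", delete=False)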