How should secret files be pushed to an EC2 Ruby on Rails application using Amazon Web Services with their Elastic Beanstalk?

Date: 2022-03-20 11:19:05

I add the files to a git repository, and I push to GitHub, but I want to keep my secret files out of the git repository. I'm deploying to AWS using:

git aws.push

The following files are in the .gitignore:


/config/database.yml
/config/initializers/omniauth.rb
/config/initializers/secret_token.rb

Following this link I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html


Quoting from that link:


Example Snippet

The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:


sources:  
    /etc/myapp: http://s3.amazonaws.com/mybucket/myobject 

Following those directions I uploaded a file to an S3 bucket and added the following to a private.config file in the .ebextensions directory:


sources:
  /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz

That config.tar.gz file will extract to:


/config/database.yml
/config/initializers/omniauth.rb
/config/initializers/secret_token.rb

However, when the application is deployed, the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:

Error message:
  No such file or directory - /var/app/current/config/database.yml
Exception class:
  Errno::ENOENT
Application root:
  /var/app/current

7 Answers

#1 (1 vote)

It is possible (and easy) to store sensitive files in S3 and copy them to your Beanstalk instances automatically.


When you create a Beanstalk application, an S3 bucket is automatically created. This bucket is used to store app versions, logs, metadata, etc.


The default aws-elasticbeanstalk-ec2-role that is assigned to your Beanstalk environment has read access to this bucket.


So all you need to do is put your sensitive files in that bucket (either at the root of the bucket or in any directory structure you desire), and create a .ebextension config file to copy them over to your EC2 instances.


Here is an example:


# .ebextensions/sensitive_files.config

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["elasticbeanstalk-us-east-1-XXX"] # Replace with your bucket name
          roleName: 
            "Fn::GetOptionSetting": 
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role" # This is the default role created for you when creating a new Beanstalk environment. Change it if you are using a custom role

files:
  /etc/pki/tls/certs/server.key: # This is where the file will be copied on the EC2 instances
    mode: "000400" # Apply restrictive permissions to the file
    owner: root # Or nodejs, or whatever suits your needs
    group: root # Or nodejs, or whatever suits your needs
    authentication: "S3Auth"
    source: https://s3-us-west-2.amazonaws.com/elasticbeanstalk-us-east-1-XXX/server.key # URL to the file in S3

This is documented here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-storingprivatekeys.html


#2 (2 votes)

The "right" way to do what I think you want to do is to use IAM Roles. You can see a blog post about it here: http://aws.typepad.com/aws/aws-iam/

Basically, it allows you to launch an EC2 instance without putting any personal credential on any configuration file at all. When you launch the instance it will be assigned the given role (a set of permissions to use AWS resources), and a rotating credential will be put on the machine automatically with Amazon IAM.


#3 (2 votes)

In order to have the .ebextension/*.config files be able to download the files from S3, they would have to be public. Given that they contain sensitive information, this is a Bad Idea.


You can launch an Elastic Beanstalk instance with an instance role, and you can give that role permission to access the files in question. Unfortunately, the file: and sources: sections of the .ebextension/*.config files do not have direct access to use this role.


You should be able to write a simple script using the AWS::S3::S3Object class of the AWS SDK for Ruby to download the files, and use a command: instead of a sources:. If you don't specify credentials, the SDK will automatically try to use the role.


You would have to add a policy to your role which allows you to download the files you are interested in specifically. It would look like this:


{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}

Then you could do something like this in your .config file:

files:
  /usr/local/bin/downloadScript.rb: http://s3.amazonaws.com/mybucket/downloadScript.rb
commands:
  01-download-config:
    command: ruby /usr/local/bin/downloadScript.rb http://s3.amazonaws.com/mybucket/config.tar.gz /tmp
  02-unzip-config:
    command: tar xvf /tmp/config.tar.gz
    cwd: /var/app/current

#4 (1 vote)

Using environment variables is a good approach. Reference passwords from the environment, so in a YAML file:

password: <%= ENV['DATABASE_PASSWORD'] %>

Then set them on the instance directly with eb or the console.


You may be worried about having such sensitive information readily available in the environment. If a process compromises your system, it can probably obtain the password no matter where it is. This approach is used by many PaaS providers such as Heroku.

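To make the mechanics concrete, here is a minimal, self-contained sketch of how Rails expands ERB inside database.yml: the file is run through ERB before the YAML is parsed, so the ENV reference is replaced at boot time. The template and password value here are made up for illustration.

```ruby
require 'erb'
require 'yaml'

# A stand-in for config/database.yml containing an ERB expression.
template = <<~YAML
  production:
    adapter: postgresql
    database: myapp_production
    password: <%= ENV['DATABASE_PASSWORD'] %>
YAML

# Normally set with `eb setenv DATABASE_PASSWORD=...` or in the console.
ENV['DATABASE_PASSWORD'] = 's3cret'

# Render the ERB first, then parse the resulting YAML, as Rails does.
config = YAML.safe_load(ERB.new(template).result)
puts config['production']['password'] # => s3cret
```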

#5 (0 votes)

Per their security documentation, Amazon EC2 supports TrueCrypt for file encryption and SSL for data in transit. Check out those documents.

You can upload a server instance with an encrypted disk, or you can use a private repo (I think this costs money on GitHub, but there are alternatives).

#6 (0 votes)

I think the best way is not to hack AWS (set hooks, upload files). Just use ENV variables.


Use the 'dotenv' gem for development (i.e. <%= ENV['LOCAL_DB_USERNAME'] %> in 'config/database.yml') and the AWS console to set variables in Beanstalk.
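As a rough illustration of what the dotenv gem does at boot, here is a minimal loader sketch. The parsing rules are simplified compared to the real gem (no quoting or interpolation), and the inline .env content is made up; the real gem reads a .env file from the project root.

```ruby
require 'stringio'

# Load KEY=VALUE lines into ENV, skipping blanks and comments.
def load_env(io)
  io.each_line do |line|
    line = line.strip
    next if line.empty? || line.start_with?('#')
    key, value = line.split('=', 2)
    ENV[key] ||= value # like Dotenv, don't clobber variables already set
  end
end

# Simulate a .env file with an in-memory IO.
load_env(StringIO.new("LOCAL_DB_USERNAME=dev\n# a comment\nLOCAL_DB_PASSWORD=devpass\n"))
puts ENV['LOCAL_DB_USERNAME']
```

In development this keeps real credentials out of database.yml; in Beanstalk the same ENV names are set through the console instead.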

#7 (0 votes)

I know this is an old post, but I couldn't find another answer anywhere, so I burned the midnight oil to come up with one. I hope it saves you several hours.

I agree with the devs who posted about how much of a PITA it is to force devs to put ENV vars in their local dev database.yml. I know the dotenv gem is nice, but you still have to maintain the ENV vars, which adds to the time it takes to bring up a new workstation.

My approach is to store a database.yml file on S3 in the bucket created by EB, and then use a .ebextensions config file to create a script in the server's pre-hook directory, so it is executed after the unzip to the staging directory but before the asset compilation, which, of course, blows up without a database.yml.

The .config file is:

# .ebextensions/sensitive_files.config
# Create a prehook command to copy database.yml from S3
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/03_copy_database.sh" :
    mode: "000755"
    owner: root
    group: root
    content: |
        #!/bin/bash
        set -xe
        EB_APP_STAGING_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_staging_dir)
        echo EB_APP_STAGING_DIR is ${EB_APP_STAGING_DIR} >/tmp/copy.log
        ls -l ${EB_APP_STAGING_DIR} >>/tmp/copy.log
        aws s3 cp s3://elasticbeanstalk-us-east-1-XXX/database.yml ${EB_APP_STAGING_DIR}/config/database.yml >>/tmp/copy.log 2>&1

Notes

  • Of course the XXX in the bucket name is a sequence number created by EB. You'll have to check S3 to see the name of your bucket.
  • The name of the script file I create is important. These scripts are executed in alphabetical order, so I was careful to name it so it sorts before the asset_compilation script.
  • Obviously, redirecting output to /tmp/copy.log is optional.
The post that helped me the most was Customizing ElasticBeanstalk deployment hooks, posted by Kenta@AWS. Thanks Kenta!
