How do I automatically create snapshots of the volumes of an Amazon EC2 instance?

Time: 2023-01-26 11:54:59

I'm trying to write a script to back up a volume automatically.

I'm following this EBS-Snapshot.sh script, found on GitHub:

#!/bin/bash

# export EC2_HOME='/etc/ec2'  # Make sure you use the API tools, not the AMI tools
# export EC2_BIN=$EC2_HOME/bin
# export PATH=$PATH:$EC2_BIN
# The exports above work, but they are not re-usable,
# so they are captured in /etc/environment and sourced below
source /etc/environment

PURGE_SNAPSHOT_IN_DAYS=10

EC2_BIN=$EC2_HOME/bin

# the certificate and private key for your Amazon account
MY_CERT='/path/to/certificate-file'
MY_KEY='/path/to/private-file'
# the ID of the EC2 instance whose volumes will be snapshotted
MY_INSTANCE_ID='your ec2-instance-id'

# temporary file
TMP_FILE='/tmp/rock-ebs-info.txt'

# get list of locally attached volumes via EC2 API:
$EC2_BIN/ec2-describe-volumes -C $MY_CERT -K $MY_KEY > $TMP_FILE
VOLUME_LIST=$(cat $TMP_FILE | grep ${MY_INSTANCE_ID} | awk '{ print $2 }')

sync

#create the snapshots
echo "Create EBS Volume Snapshot - Process started at $(date +%m-%d-%Y-%T)"
echo ""
echo $VOLUME_LIST
for volume in $(echo $VOLUME_LIST); do
   NAME=$(cat $TMP_FILE | grep Name | grep $volume | awk '{ print $5 }')
   DESC=$NAME-$(date +%m-%d-%Y)
   echo "Creating Snapshot for the volume: $volume with description: $DESC"
   echo "Snapshot info below:"
   $EC2_BIN/ec2-create-snapshot -C $MY_CERT -K $MY_KEY -d $DESC $volume
   echo ""
done

echo "Process ended at $(date +%m-%d-%Y-%T)"
echo ""

rm -f $TMP_FILE

#remove those snapshot which are $PURGE_SNAPSHOT_IN_DAYS old

I have the two files for X.509 authentication and the instance ID, but I don't understand the script or how to parameterise the volume that I want to back up.

I don't understand the first line (source) or EC2_BIN. With that configuration, it lists all the volumes and makes a snapshot of every one of them...

For the snapshot description, how can I change this line to add my own text?

DESC=$NAME-$(date +%m-%d-%Y)

Sorry, I'm a beginner and I don't understand the whole script.

EDIT:

I get this error with this new code:

Creating Snapshot for the volume: ([ec2-describe-volumes]) with description: -03-13-2012
Snapshot info below:
Client.InvalidParameterValue: Value (([ec2-describe-volumes])) for parameter volumeId is invalid. Expected: 'vol-...'.
Process ended at 03-13-2012-08:11:35

And this is the code:

#!/bin/bash

# Java home for the default Debian install path:
export JAVA_HOME=/usr
#add ec2 tools to default path
#export PATH=~/.ec2/bin:$PATH


#export EC2_HOME='/etc/ec2'  # Make sure you use the API tools, not the AMI tools
export EC2_BIN=/usr/bin/
#export PATH=$PATH:$EC2_BIN
# The exports above work, but they are not re-usable,
# so they are captured in /etc/environment and sourced below
source /etc/environment

PURGE_SNAPSHOT_IN_DAYS=60

#EC2_BIN=$EC2_HOME/bin

# the certificate and private key for your Amazon account
MY_CERT='cert-xx.pem'
MY_KEY='pk-xx.pem'
# fetching the instance-id from the metadata repository

MY_INSTANCE_ID=`curl http://169.254.169.254/1.0/meta-data/instance-id`

# temporary file
TMP_FILE='/tmp/rock-ebs-info.txt'

# get list of locally attached volumes via EC2 API:
$EC2_BIN/ec2-describe-volumes -C $MY_CERT -K $MY_KEY > $TMP_FILE

#VOLUME_LIST=$(cat $TMP_FILE | grep ${MY_INSTANCE_ID} | awk '{ print $2 }')
VOLUME_LIST=(`ec2-describe-volumes --filter attachment.instance-id=$MY_INSTANCE_ID | awk '{ print $2 }'`)

sync

#create the snapshots
echo "Create EBS Volume Snapshot - Process started at $(date +%m-%d-%Y-%T)"
echo ""
echo $VOLUME_LIST
echo "-------------"
for volume in $(echo $VOLUME_LIST); do
   NAME=$(cat $TMP_FILE | grep Name | grep $volume | awk '{ print $5 }')
   DESC=$NAME-$(date +%m-%d-%Y)
   echo "Creating Snapshot for the volume: $volume with description: $DESC"
   echo "Snapshot info below:"
   $EC2_BIN/ec2-create-snapshot -C $MY_CERT -K $MY_KEY -d $DESC $volume
   echo ""
done

echo "Process ended at $(date +%m-%d-%Y-%T)"
echo ""

rm -f $TMP_FILE

#remove those snapshot which are $PURGE_SNAPSHOT_IN_DAYS old

7 Answers

#1


4  

Ok well,

  1. The first line, where he runs source. That's the same as . /etc/environment. All he's doing is loading a file that has the list of environment variables that the Amazon tools require. At least this is what I assume.

  2. He's making this script much more complicated than it needs to be. He doesn't need to run the ec2-describe-volumes command, save the output to a file, then grep the output, etc.

  3. You can put whatever you want for DESC. Just replace everything to the right of the = with the text you want, and make sure to put quotes around it (see the sketch after this list).
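
For example, a minimal tweak (the "nightly-backup" label here is just placeholder text, not something from the original script):

    # keep the volume name and the date, and put your own label in front
    DESC="nightly-backup-$NAME-$(date +%m-%d-%Y)"

Quoting the value matters, because the script later passes $DESC to ec2-create-snapshot -d without quotes, and a description containing spaces would otherwise be split into several arguments.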

I would change two things about this script.

  1. Get the instance ID at runtime in the script; don't hard-code it. This line will work no matter where the script is running.

    MY_INSTANCE_ID=`curl http://169.254.169.254/1.0/meta-data/instance-id`
    
  2. Instead of calling ec2-describe-volumes and saving the output to a temp file, just use a filter on the command to tell it which instance ID you want.

    VOLUME_LIST=(`ec2-describe-volumes --filter attachment.instance-id=$MY_INSTANCE_ID | awk '{ print $2 }'`)
    

#2


9  

The above solution did not work completely for me. After an hour-long chat with Amazon support, I now have this working script, which will always create snapshots of all volumes attached to the current instance:

#!/bin/bash

# Set Environment Variables as cron doesn't load them
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export EC2_HOME=/usr
export EC2_BIN=/usr/bin/
export PATH=$PATH:$EC2_HOME/bin
export EC2_CERT=/home/ubuntu/.ec2/cert-SDFRTWFASDFQFEF.pem
export EC2_PRIVATE_KEY=/home/ubuntu/.ec2/pk-SDFRTWFASDFQFEF.pem
export EC2_URL=https://eu-west-1.ec2.amazonaws.com # Set your region endpoint here

# Get instance id of the current server instance
MY_INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# get list of locally attached volumes 
VOLUMES=$(ec2-describe-volumes | grep ${MY_INSTANCE_ID} | awk '{ print $2 }')
echo "Instance-Id: $MY_INSTANCE_ID" 

# Create a snapshot for all locally attached volumes
LOG_FILE=/home/ubuntu/ebsbackup/ebsbackup.log
echo "********** Starting backup for instance $MY_INSTANCE_ID" >> $LOG_FILE
for VOLUME in $(echo $VOLUMES); do
    echo "Backup Volume:   $VOLUME" >> $LOG_FILE
    ec2-consistent-snapshot --aws-access-key-id ASDASDASDASD --aws-secret-access-key asdfdsfasdfasdfasdfasdf --mysql --mysql-host localhost --mysql-username root --mysql-password asdfasdfasdfasdfd --description "Backup ($MY_INSTANCE_ID) $(date +'%Y-%m-%d %H:%M:%S')" --region eu-west-1 $VOLUME
done
echo "********** Ran backup: $(date)" >> $LOG_FILE
echo "Completed"

I set up a cronjob in /etc/cron.d/ebsbackup:

01 * * * * ubuntu /home/ubuntu/.ec2/myscriptname

This works pretty well for me... :-)

Hope this helps, Sebastian

#3


1  

I came across many people looking for a tool to manage EBS snapshots. I found several tools on the internet, but they were just scripts and incomplete solutions. Finally I decided to create a program that is more flexible, centralized and easy to administer.

The idea is to have one centralized program to manage all the EBS snapshots (local to the instance or remote).

I have created a small Perl program, https://github.com/sciclon/EBS_Snapshots

Some features:

  • Program runs in daemon mode or script mode (crontab)

  • You can choose only locally attached volumes, or remote ones as well

  • You can define a log file

  • You can define, for each volume, how many snapshots to keep

  • You can define, for each volume, the frequency between snapshots

  • Frequency and quantity work like a "round-robin": when the limit is reached, the oldest snapshot is removed (see the sketch after this list)

  • You can readjust the quantity in one step: if you have 6 snapshots and you change the quantity to 3, the process readjusts it automatically

  • You can define a "prescript" to run before taking the snapshot, for example to unmount the volume, stop a service, or check the instance load. The parent process waits for the exit code ("0" means success), and you can decide whether to continue or not depending on that exit code

  • You can define a "postscript" to run any script after taking the snapshot (for example an email telling you about it)

  • You can mark "Protected Snapshots" to skip the snapshots you define; they are treated as read-only and are never erased

  • You can reconfigure the script "on the fly" while it is running in daemon mode; the script accepts signals and IPC

  • It has a "local cache" to avoid requesting the API several times. You can add or modify any configuration in the config file and reload it without killing the process
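
The "round-robin" retention described above boils down to keeping only the newest N snapshots per volume and deleting the rest. A minimal sketch of that idea in bash, using the modern AWS CLI rather than the legacy ec2-api-tools (the volume ID and keep-count below are placeholders):

#!/bin/bash
# Placeholders - adjust for your environment
VOLUME_ID="vol-xxxxxxxx"
KEEP=3   # number of snapshots to keep for this volume

# List this volume's snapshots, oldest first
SNAPSHOTS=$(aws ec2 describe-snapshots \
  --owner-ids self \
  --filters Name=volume-id,Values=$VOLUME_ID \
  --query 'sort_by(Snapshots, &StartTime)[].SnapshotId' \
  --output text)

# Delete everything except the newest $KEEP snapshots
COUNT=$(echo $SNAPSHOTS | wc -w)
DELETE=$((COUNT - KEEP))
for SNAPSHOT in $SNAPSHOTS; do
  [ $DELETE -le 0 ] && break
  echo "Deleting old snapshot $SNAPSHOT"
  aws ec2 delete-snapshot --snapshot-id $SNAPSHOT
  DELETE=$((DELETE - 1))
done

Run per volume (for example inside the volume loop of the scripts above) and it behaves like the retention described here: once more than $KEEP snapshots exist, the oldest ones are purged.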

#4


0  

Here's a function I wrote in Ruby to snapshot all volumes on all instances in all regions.

require 'aws-sdk'   # v1 of the aws-sdk gem (AWS::EC2 interface)
require 'socket'

# Host name used in the snapshot description
HOSTNAME = Socket.gethostname

def snapshot_all_attached_volumes(region)
  # For every instance in this region
  AWS::EC2.new(:region => region).instances.each do |instance|
    # get all the attached volumes
    instance.attachments.each do |mountpoint, attachment|
      # and create snapshots, described with the host and script name
      attachment.volume.create_snapshot("Automated snapshot #{HOSTNAME}:#{$0}")
    end
  end
end

regions = AWS::EC2.regions.map(&:name)
regions.each do |region| 
  begin
    snapshot_all_attached_volumes(region)
    # delete_all_old_snapshots(region) 
  rescue
    puts "#{$!}"
  end 
end

#5


0  

I don't know about you, but I prefer to make an AMI instead of a snapshot. This script came from an idea from Craig, an Amazon employee; they were developing a snapshot script called Arche. The script is simple: you add a Backup tag to an EC2 instance, and every tagged instance gets an AMI created from it. I tested it in my environment. You can change the commands in this script to back up snapshots instead, too.

Before you run this, configure the Linux environment variables with your cert and pk keys.

#!/bin/bash
echo "AMI Backup is starting..."
echo "taking AMI Backup..."

day_of_year=$(date +%j)
week_of_year=$(date +%U)
week_of_year=$( printf "%.0f" $week_of_year )
year=$(date +%Y)

for INST in $(ec2-describe-instances --region=sa-east-1 --filter "tag:Backup=On" | awk '/^INSTANCE/ {print $2}')
do
        start_time=$(date +%R)
        ami=$(ec2-create-image $INST --name $INST$week_of_year --no-reboot | awk '{print $2}')
        ec2-create-tags $ami --tag Day_Year=$day_of_year > /dev/null
        ec2-create-tags $ami --tag Week_Year=$week_of_year > /dev/null
        ec2-create-tags $ami --tag Src_Instance=$INST > /dev/null
        ec2-create-tags $ami --tag Start_Time=$start_time > /dev/null
        end_time=$(date +%R)
        ec2-create-tags $ami --tag End_Time=$end_time > /dev/null
        echo "Created AMI $ami for volume $INST"
done

year=$(date +%Y)
expire_day=`expr $day_of_year  -  2`
expire_week=`expr $week_of_year  -  2`


echo "identifying AMI to be deleted"
# deregister every AMI whose Week_Year tag has expired
for delete in $(ec2-describe-images --filter "tag:Week_Year=$expire_week" | awk '/^IMAGE/ {print $2}')
do
        ec2dereg $delete
        echo "deleted $delete"
done

#6


0  

I think the best way now is to use AWS Lambda to take snapshots of your EC2 instances. You can find more details at this link:

http://www.iwss.co.uk/ec2-instance-snapshot-through-aws-lambda-function-using-phyton-2-7/

#7


0  

Create a CloudWatch Events rule that takes snapshots on a schedule. You can use a rate expression or a cron expression to specify the schedule.

To create a rule:

  1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

  2. In the navigation pane, choose Events, Create rule.

  3. For Event Source, do the following:

     a. Choose Schedule.

     b. Choose Fixed rate of and specify the schedule interval (for example, 5 minutes). Alternatively, choose Cron expression and specify a cron expression (for example, every 15 minutes Monday through Friday, starting at the current time). Example schedule expressions are shown after these steps.

  4. For Targets, choose Add target and then select EC2 Create Snapshot API call.

  5. For Volume ID, type the volume ID of the targeted Amazon EBS volume.

  6. For AWS permissions, choose the option to create a new role. The new role grants the built-in target permissions to access resources on your behalf.

  7. Choose Configure details.

  8. For Rule definition, type a name and description for the rule.

  9. Choose Create rule.
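
For reference, the two schedule formats mentioned in step 3b look like this. This is only a sketch using the AWS CLI (the rule names are placeholders); the EC2 Create Snapshot target itself is still attached in the console as described above:

# Rate expression: run once a day
aws events put-rule --name daily-ebs-snapshot --schedule-expression "rate(1 day)"

# Cron expression: every 15 minutes, Monday through Friday
aws events put-rule --name weekday-ebs-snapshot --schedule-expression "cron(0/15 * ? * MON-FRI *)"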
