How do I upload a viewable file to Amazon S3?

Time: 2021-10-03 16:25:49

Let me start off by saying that I'm normally very reluctant to post questions like this, as I always feel there's an answer to everything SOMEWHERE on the internet. After spending countless hours looking for an answer to this question, however, I've finally given up on that belief.

Assumption

This works:

s3.getSignedUrl('putObject', params);

What am I trying to do?

  1. Upload a file via PUT (from the client-side) to Amazon S3 using the getSignedUrl method
  2. Allow anyone to view the file that was uploaded to S3

Note: If there's an easier way to allow client-side (iPhone) uploads to Amazon S3 with pre-signed URLs (and without exposing credentials client-side), I'm all ears.

*Main Problems

  1. When viewing the AWS Management Console, the file uploaded has blank Permissions and Metadata.
  2. When viewing the uploaded file (i.e. by double-clicking the file in the AWS Management Console) I get an AccessDenied error.

What have I tried?

Try #1: My original code

In NodeJS I generate a pre-signed URL like so:

var params = {Bucket: mybucket, Key: "test.jpg", Expires: 600};
s3.getSignedUrl('putObject', params, function (err, url){
  console.log(url); // this is the pre-signed URL
});

The pre-signed URL looks something like this:

https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Expires=1391069292&Signature=u%2BrqUtt3t6BfKHAlbXcZcTJIOWQ%3D

Now I upload the file via PUT

curl -v -T myimage.jpg https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Expires=1391069292&Signature=u%2BrqUtt3t6BfKHAlbXcZcTJIOWQ%3D

PROBLEM
I get the *Main Problems listed above.

Try #2: Adding Content-Type and ACL on PUT

I've also tried adding the Content-Type and x-amz-acl in my code by replacing the params like so:

var params = {Bucket: mybucket, Key: "test.jpg", Expires: 600, ACL: "public-read-write", ContentType: "image/jpeg"};

Then I try a good ol' PUT:

curl -v -H "image/jpeg" -T myimage.jpg https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Content-Type=image%2Fjpeg&Expires=1391068501&Signature=0yF%2BmzDhyU3g2hr%2BfIcVSnE22rY%3D&x-amz-acl=public-read-write

PROBLEM
My terminal outputs some errors:

-bash: Content-Type=image%2Fjpeg: command not found
-bash: x-amz-acl=public-read-write: command not found
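
As it turns out, these "command not found" errors come from the shell rather than from S3: the unquoted & characters make bash split the command at each query parameter. Quoting the URL (and giving -H the full "Header: value" form) fixes the shell errors:

curl -v -H "Content-Type: image/jpeg" -T myimage.jpg "https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Content-Type=image%2Fjpeg&Expires=1391068501&Signature=0yF%2BmzDhyU3g2hr%2BfIcVSnE22rY%3D&x-amz-acl=public-read-write"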

And I also get the *Main Problems listed above.

Try #3: Modifying Bucket Permissions to be public

All of the items listed below are ticked in the AWS Management Console:

Grantee: Everyone can [List, Upload/Delete, View Permissions, Edit Permissions]
Grantee: Authenticated Users can [List, Upload/Delete, View Permissions, Edit Permissions]

Bucket Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1390381397000",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}

Try #4: Setting IAM permissions

I set the user policy to be this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

And the AuthenticatedUsers group policy to be this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1391063032000",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Try #5: Setting CORS policy

I set the CORS policy to this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

And... Now I'm here.

7 Answers

#1 (score: 10)

Update

I have bad news. According to the release notes of SDK 2.1.6 at http://aws.amazon.com/releasenotes/1473534964062833:

"The SDK will now throw an error if ContentLength is passed into an 
Amazon S3 presigned URL (AWS.S3.getSignedUrl()). Passing a 
ContentLength is not supported by the SDK, since it is not enforced on 
S3's side given the way the SDK is currently generating these URLs. 
See GitHub issue #457."

I have found that on some occasions ContentLength must be included (specifically if your client passes it, so the signatures will match), while on other occasions getSignedUrl will complain if you include ContentLength, with a parameter error: "contentlength is not supported in presigned urls". I noticed that the behavior would change when I changed the machine which was making the call. Presumably the other machine made a connection to another Amazon server in the farm.

I can only guess why the behavior exists in some cases but not in others. Perhaps not all of Amazon's servers have been fully upgraded? In either case, to handle this problem, I now make an attempt using ContentLength and, if it gives me the parameter error, call getSignedUrl again without it. This is a work-around to deal with this strange behavior in the SDK.

A little example... not very pretty to look at but you get the idea:

MediaBucketManager.getPutSignedUrl = function ( params, next ) {
    var _self = this;
    _self._s3.getSignedUrl('putObject', params, function ( error, data ) {
        if (error) {
            console.log("An error occurred retrieving a signed url for putObject", error);
            // TODO: build contextual error
            if (error.code == "UnexpectedParameter" && error.message.search("ContentLength") > -1) {
                // retry without ContentLength, which this server rejected
                if (params.ContentLength) delete params.ContentLength;
                MediaBucketManager.getPutSignedUrl(params, function ( error, data ) {
                    if (error) {
                        console.log("An error occurred retrieving a signed url for putObject", error);
                        return next(error);
                    } else {
                        console.log("Retrieved a signed url for putObject:", data);
                        return next(null, data);
                    }
                });
            } else {
                return next(error);
            }
        } else {
            console.log("Retrieved a signed url for putObject:", data);
            return next(null, data);
        }
    });
};

So, below is not entirely correct (it will be correct in some cases but give you the parameter error in others) but might help you get started.

Old Answer

It seems that (for a signedUrl used to PUT a file to S3 with a public-read ACL) there are a few headers that will be compared when the PUT request is made to S3. They are compared against what was passed to getSignedUrl:

CacheControl: 'STRING_VALUE',
ContentDisposition: 'STRING_VALUE',
ContentEncoding: 'STRING_VALUE',
ContentLanguage: 'STRING_VALUE',
ContentLength: 0,
ContentMD5: 'STRING_VALUE',
ContentType: 'STRING_VALUE',
Expires: new Date || 'Wed De...'

see the full list here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

When you're calling getSignedUrl you'll pass a 'params' object (fairly clear in the documentation) that includes the Bucket, Key, and Expires data. Here is a (NodeJS) example:

var params = { Bucket:bucket, Key:key, Expires:expires };
s3.getSignedUrl('putObject', params, function ( error, data ) {
    if (error) {
        // handle error
    } else {
        // handle data
    }
});

Less clear is setting the ACL to 'public-read':

var params = { Bucket:bucket, Key:key, Expires:expires, ACL:'public-read' };

More obscure is the notion of passing headers that you expect the client, using the signed url, to send along with the PUT operation to S3:

var params = {
    Bucket:bucket,
    Key:key,
    Expires:expires,
    ACL:'public-read',
    ContentType:'image/png',
    ContentLength:7469
};

In my example above, I have included ContentType and ContentLength because those two headers are included when using XMLHttpRequest in javascript, and Content-Length in particular cannot be changed. I suspect that will be the case for other implementations of HTTP requests, like curl and such, because they are required headers when submitting HTTP requests that include a body (of data).

If the client does not include the ContentType and ContentLength data about the file when requesting a signedUrl, then when it comes time to PUT the file to S3 (with that signedUrl), the S3 service will find those headers included with the client's request (because they are required headers) but the signature will not have included them - so they will not match, and the operation will fail.
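
To make this concrete, here is a minimal client-side sketch of such a PUT (assuming the url was signed with ContentType 'image/png', and that signedUrl and fileBlob are already in hand):

var xhr = new XMLHttpRequest();
xhr.open('PUT', signedUrl, true);
// this must match the ContentType the url was signed with
xhr.setRequestHeader('Content-Type', 'image/png');
// the browser sets Content-Length itself, from the blob
xhr.send(fileBlob);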

So, it appears that you will have to know, in advance of making your getSignedUrl call, the content type and content length of the file to be PUT to S3. This wasn't a problem for me because I exposed a REST endpoint to allow our clients to request a signed url just before making the PUT operation to S3. Since the client has access to the file to be submitted (at the moment they are ready to submit), it was a trivial operation for the client to access the file size and type and request a signed url with that data from my endpoint.
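
For illustration, a minimal sketch of such an endpoint using Express (the route name, field names, and bucket are assumptions for the example, not part of the original setup):

var express = require('express');
var AWS = require('aws-sdk');
var app = express();
app.use(express.json());
var s3 = new AWS.S3(); // assumes credentials/region are configured elsewhere

// The client sends the file's name and type just before uploading
// and gets back a pre-signed PUT url for it.
app.post('/sign-upload', function (req, res) {
    var params = {
        Bucket: 'mybucket',
        Key: req.body.fileName,         // e.g. "test.jpg"
        Expires: 600,
        ACL: 'public-read',
        ContentType: req.body.fileType  // e.g. "image/jpeg"
        // ContentLength: req.body.fileSize (see the Update above: newer SDKs reject this)
    };
    s3.getSignedUrl('putObject', params, function (err, url) {
        if (err) return res.status(500).json({error: 'could not sign url'});
        res.json({signedUrl: url});
    });
});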

#2 (score: 4)

As per @Reinsbrain's request, this is the Node.js version of implementing client-side uploads to the server with "public-read" rights.

BACKEND (NODE.JS)

var AWS = require('aws-sdk');
var AWS_ACCESS_KEY_ID = process.env.S3_ACCESS_KEY;
var AWS_SECRET_ACCESS_KEY = process.env.S3_SECRET;
AWS.config.update({accessKeyId: AWS_ACCESS_KEY_ID, secretAccessKey: AWS_SECRET_ACCESS_KEY});
var s3 = new AWS.S3();
var moment = require('moment');
var S3_BUCKET = process.env.S3_BUCKET;
var crypto = require('crypto');
var POLICY_EXPIRATION_TIME = 10;// change to 10 minute expiry time
var S3_DOMAIN = process.env.S3_DOMAIN;

exports.writePolicy = function (filePath, contentType, maxSize, redirect, callback) {
  var readType = "public-read";

  var expiration = moment().add('m', POLICY_EXPIRATION_TIME);//OPTIONAL: only if you don't want a 15 minute expiry

  var s3Policy = {
    "expiration": expiration,
    "conditions": [
      ["starts-with", "$key", filePath],
      {"bucket": S3_BUCKET},
      {"acl": readType},
      ["content-length-range", 2048, maxSize], //min 2kB to maxSize
      {"redirect": redirect},
      ["starts-with", "$Content-Type", contentType]
    ]
  };

  // stringify and encode the policy
  var stringPolicy = JSON.stringify(s3Policy);
  var base64Policy = new Buffer(stringPolicy, "utf-8").toString("base64");

  // sign the base64 encoded policy
  var testbuffer = new Buffer(base64Policy, "utf-8");

  var signature = crypto.createHmac("sha1", AWS_SECRET_ACCESS_KEY)
    .update(testbuffer).digest("base64");

  // build the results object to send to calling function
  var credentials = {
    url: S3_DOMAIN,
    key: filePath,
    AWSAccessKeyId: AWS_ACCESS_KEY_ID,
    acl: readType,
    policy: base64Policy,
    signature: signature,
    redirect: redirect,
    content_type: contentType,
    expiration: expiration
  };

  callback(null, credentials);
};
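
A quick usage sketch (the values and require path are made up for illustration):

var policy = require('./writePolicy'); // wherever the module above lives
policy.writePolicy("uploads/photo.jpg", "image/jpeg", 10485760, "http://example.com/uploaded", function (err, credentials) {
    if (!err) console.log(credentials); // these feed the frontend form fields below
});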

FRONTEND, assuming the values from the server are in input fields and that you're submitting images via a form submission (i.e. POST, since I couldn't get PUT to work):

function dataURItoBlob(dataURI, contentType) {
  var binary = atob(dataURI.split(',')[1]);
  var array = [];
  for(var i = 0; i < binary.length; i++) {
    array.push(binary.charCodeAt(i));
  }
  return new Blob([new Uint8Array(array)], {type: contentType});
}

function submitS3(callback) {
  var base64Data = $("#file").val();//your file to upload e.g. img.toDataURL("image/jpeg")
  var contentType = $("#contentType").val();
  var xmlhttp = new XMLHttpRequest();
  var blobData = dataURItoBlob(base64Data, contentType);

  var fd = new FormData();
  fd.append('key', $("#key").val());
  fd.append('acl', $("#acl").val());
  fd.append('Content-Type', contentType);
  fd.append('AWSAccessKeyId', $("#accessKeyId").val());
  fd.append('policy', $("#policy").val());
  fd.append('signature', $("#signature").val());
  fd.append("redirect", $("#redirect").val());
  fd.append("file", blobData);

  xmlhttp.onreadystatechange = function () {
    if (xmlhttp.readyState == 4) {
      //do whatever you want on completion
      callback();
    }
  };
  var someBucket = "your_bucket_name";
  var S3_DOMAIN = "https://" + someBucket + ".s3.amazonaws.com/";
  xmlhttp.open('POST', S3_DOMAIN, true);
  xmlhttp.send(fd);
}

Note: I was uploading more than 1 image per submission so I added multiple iframes (with the FRONTEND code above) to do simultaneous multi-image uploads.

#3 (score: 3)

Step 1: Set the S3 policy:

{
    "expiration": "2040-01-01T00:00:00Z",
    "conditions": [
                    {"bucket": "S3_BUCKET_NAME"},
                    ["starts-with","$key",""],
                    {"acl": "public-read"},
                    ["starts-with","$Content-Type",""],
                    ["content-length-range",0,524288000]
                  ]
}

Step 2: Prepare the AWS keys, policy, and signature; in this example, all are stored in an s3_tokens dictionary.

The trick here is in the policy and signature: 1) save the step 1 policy to a file and dump it as JSON; 2) Base64-encode the JSON file (s3_policy_json):

#python
import base64
policy = base64.b64encode(s3_policy_json)  # s3_policy_json: contents of the policy file

signature:

#python
import base64, hmac, hashlib
s3_tokens_dict['signature'] = base64.b64encode(hmac.new(AWS_SECRET_ACCESS_KEY, policy, hashlib.sha1).digest())

Step 3: From your JS:

$scope.upload_file = function(file_to_upload,is_video) {
    var file = file_to_upload;
    var key = $scope.get_file_key(file.name,is_video);
    var filepath = null;
    if ($scope.s3_tokens['use_s3'] == 1){
       var fd = new FormData();
       fd.append('key', key);
       fd.append('acl', 'public-read'); 
       fd.append('Content-Type', file.type);      
       fd.append('AWSAccessKeyId', $scope.s3_tokens['aws_key_id']);
       fd.append('policy', $scope.s3_tokens['policy']);
       fd.append('signature',$scope.s3_tokens['signature']);
       fd.append("file",file);
       var xhr = new XMLHttpRequest();
       var target_url = 'http://s3.amazonaws.com/<bucket>/';
       target_url = target_url.replace('<bucket>',$scope.s3_tokens['bucket_name']);
       xhr.open('POST', target_url, false); //MUST BE LAST LINE BEFORE YOU SEND 
       var res = xhr.send(fd);
       filepath = target_url.concat(key);
    }
    return filepath;
};

#4 (score: 1)

You can in fact use getSignedUrl as you specified above. Here's an example of how to both get a URL to read from S3 and how to use getSignedUrl for posting to S3. The files get uploaded with the same permissions as the IAM user that was used to generate the URLs. The problems you are noticing may be a function of how you are testing with curl. I uploaded from my iOS app using AFNetworking (AFHTTPSessionManager uploadTaskWithRequest). Here's an example of how to post using the signed URL: http://pulkitgoyal.in/uploading-objects-amazon-s3-pre-signed-urls/

var s3 = new AWS.S3();  // Assumes you have your credentials and region loaded correctly.

This is for reading from S3. The URL will work for 60 seconds.

var params = {Bucket: 'mys3bucket', Key: 'file for temp access.jpg', Expires: 60};
s3.getSignedUrl('getObject', params, function (err, url) {
    if (url) console.log("The URL is", url);
});

This is for writing to S3. The URL will work for 60 seconds.

        var key = "file to give temp permission to write.jpg";
        var params = {
            Bucket: 'yours3bucket',
            Key: key,
            ContentType: mime.lookup(key),      // This uses the Node mime library
            Body: '',
            ACL: 'private',
            Expires: 60
        };
        var surl = s3.getSignedUrl('putObject', params, function(err, surl) {
            if (!err) {
                console.log("signed url: " + surl);
            } else {
                console.log("Error signing url " + err);
            }
        });

#5 (score: 0)

It sounds like you don't really need a signed URL, just that you want your uploads to be publicly viewable. If that's the case, you just need to go to the AWS console, choose the bucket you want to configure, and click on permissions. Then click the button that says 'add bucket policy' and input the following rule:

{
    "Version": "2008-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "readonly policy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKETNAME/*"
        }
    ]
}

where BUCKETNAME should be replaced with your own bucket's name. The contents of that bucket will be readable by anyone now, provided they have a direct link to a specific file.

#6 (score: 0)

Could you just upload using your PUT pre-signed URL without worrying about permissions, but then immediately create another pre-signed URL with a GET method and an infinite expiration, and provide that to the viewing public?
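
A minimal sketch of that idea (bucket and key assumed; note that a truly infinite expiration isn't possible: S3 caps pre-signed URL lifetimes, e.g. at 7 days with Signature Version 4):

var readParams = {Bucket: 'mybucket', Key: 'test.jpg', Expires: 60 * 60 * 24 * 7};
s3.getSignedUrl('getObject', readParams, function (err, url) {
    if (!err) console.log("Shareable read URL:", url); // hand this out to viewers
});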

#7 (score: -1)

Are you using the official AWS Node.js SDK? http://aws.amazon.com/sdkfornodejs/

Here's how I'm using it...

// assumes: var mime = require('mime'); `buffer` holds the file's contents
var data = {
    Bucket: "bucket-xyz",
    Key: "uploads/" + filename,
    Body: buffer,
    ACL: "public-read",
    ContentType: mime.lookup(filename)
};
s3.putObject(data, callback);

And my uploaded files are publicly readable. Hope it helps.
