Prereq knowledge:
A basic understanding of the following will help:
Basically, using your AWS security credentials, you pre-create a URL for a specific file that you expect to store in a specific bucket. Once you have created that URL correctly, you can simply use it as the destination endpoint and attach the file in the body of your request. Ref here.
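As a rough sketch (with made-up bucket, key and signature values), a presigned URL is just the object's normal URL plus query parameters that carry the expiry and the signature:

```typescript
// Hypothetical presigned URL; the bucket, key (42.jpg) and signature
// values here are invented for illustration.
const presignedUrl =
  'https://my-bucket.s3.eu-west-2.amazonaws.com/42.jpg' +
  '?X-Amz-Algorithm=AWS4-HMAC-SHA256' +
  '&X-Amz-Expires=900' +
  '&X-Amz-Signature=deadbeef';

// Everything S3 needs to authorize the request travels in the URL itself:
const parsed = new URL(presignedUrl);
console.log(parsed.pathname);                          // the object key: '/42.jpg'
console.log(parsed.searchParams.get('X-Amz-Expires')); // lifetime in seconds: '900'
```

Because the signature covers the method, key and expiry, the client can use the URL without ever seeing your credentials.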
There are various ways to upload files directly from the browser to S3. You need to ‘sign’ what you are sending using your AWS access key id and secret key, but obviously you don't want to expose those in client-side JavaScript.
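Conceptually, that ‘signing’ is an HMAC over the request details, keyed with your secret key. The real SigV4 scheme derives the signing key in several extra steps, so treat this as a sketch of the idea only, not the actual algorithm:

```typescript
import { createHmac } from 'crypto';

// Sketch only: real AWS SigV4 first derives a signing key from the secret
// key, date, region and service, then HMACs a canonical request string.
function sign(secretKey: string, stringToSign: string): string {
  return createHmac('sha256', secretKey).update(stringToSign).digest('hex');
}
```

Anyone holding the secret key can produce a valid signature, which is exactly why the key must stay on the server.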
If you search you can find plenty of examples that simply request your credentials from your server (over https, of course), use them in the client to sign the request client side, and then upload the file. This is one such example. 81 lines of code! It also seems quite complicated!
Read on for a MUCH easier solution!

You can skip this bit if you want, I won't mind; it's not my code! ;)
The template at the start is just a standard file input to select a file, plus a submit button that calls the Upload method.
In the constructor we inject the uploadService (which is poorly named, because it's not an upload service but actually a service that simply returns your S3 signature and policy, which are stored in the local variables declared at lines 19 and 20).
When the component initializes, on line 17 a call is made to your server to fetch the policy. The response is handled on line 27 to populate the local variables.
Now comes the icky part:

Line 41 builds a timestamp (as S3 requires that requests are stamped), and then line 53 is the upload method, which builds the form with all the required data for S3 (ugh!) and finally attaches the file at line 69.
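For reference, the timestamp AWS-signed requests use is compact ISO 8601 in UTC (YYYYMMDD'T'HHMMSS'Z'). A helper like this hypothetical one (the name is my own) produces it:

```typescript
// Compact ISO 8601 UTC timestamp, e.g. 20240101T000000Z, as used in
// AWS SigV4-signed requests.
function amzDate(d: Date): string {
  // Drop the dashes, colons and milliseconds from the standard ISO string.
  return d.toISOString().replace(/[-:]/g, '').replace(/\.\d{3}/, '');
}

console.log(amzDate(new Date(Date.UTC(2024, 0, 1)))); // 20240101T000000Z
```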
If you wanted to, you could also attach the file as a Base64-encoded string, but then you'd need even more code to convert the file first. Kinda like this.
The above example uses a standard multi-part form POST to Amazon at line 71. Notice that is NOT a presigned URL: it is simply creating a policy and signature, then POSTing the form data to the bucket. IMHO an easier way is to just use presigned URLs. NOTE these MUST use a PUT, so you cannot POST a form like the above!
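Stripped to its essentials, the request a presigned PUT expects looks like this. A sketch: buildPutRequest and the usage line are my own, not part of the example above.

```typescript
// A presigned PUT wants the raw file bytes as the body, with a Content-Type
// matching what was signed; there is no multipart form wrapper at all.
function buildPutRequest(body: unknown, contentType: string) {
  return {
    method: 'PUT', // presigned upload URLs accept PUT, not a form POST
    headers: { 'Content-Type': contentType },
    body,
  };
}

// usage in the browser: fetch(presignedUrl, buildPutRequest(file, 'image/jpeg'))
```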
Since you have to call your server anyway to access your (secure) S3 credentials, I figured why not just take the easier route? (excuse the pun)
/products/product-form.component.ts
...
uploadToS3(presignedUrl: string) {
  this._productService.putFileToS3(this.selectedFile, presignedUrl).subscribe(
    response => console.log(response));
}

When uploadToS3 is called, it subscribes to the observable returned by a method in my ProductService (where all my methods for talking to the Product endpoints live, as per a normal Angular service setup). Note that I am just logging the response for simplicity; in fact S3 returns a 200 with no content, so you only need to handle any error returned (which you could do in the putFileToS3 method below).
And here is that method in my ProductService class:
services/product.service.ts
...
@Injectable()
export class ProductService {
  constructor(
    private http: Http
  ) {}
...
putFileToS3(body: File, presignedUrl: string) {
  const headers = new Headers({ 'Content-Type': 'image/jpeg' });
  const options = new RequestOptions({ headers: headers });
  // S3 answers a presigned PUT with an empty 200, so there is no JSON
  // to parse; just pass the response through.
  return this.http.put(presignedUrl, body, options).map(
    (response: Response) => response
  );
}

Note that I have a LOT less code than the first example.
Not at all. Notice that my uploadToS3 is expecting a presignedUrl... so where is that coming from then?
When the user selects a file and clicks the upload button, I expect them to also have filled out all the other product attributes and to save the product to my server. So my form has a file input plus all the other inputs needed. Upon clicking submit, the following method fires and posts ALL the product data (but NOT the file, or even the filename or type, though you could if you wanted to). Everything about the file remains on the client.
products/my-product.component.ts
createOnApi() {
  this._productService.postProduct(this.myForm.value).subscribe(
    data => {
      this.successToast('Product was created');
      this.uploadToS3(data.presigned_url);
      this._router.navigate([`/pages/products/${data.id}`]);
    },
    error => {
      this.errorToast('Product was not created');
    }
  );
}

My API saves the Product, and in the response we get back the presigned_url as an attribute on the Product class, which we then pass along to uploadToS3 to transfer the actual file to S3. Here is the Product class:
models/product.ts
export class Product {
  constructor(
    public id: number,
    ...
  ) {}
  public presigned_url: string;
}

Meanwhile we navigate to the page for the newly created product. By the time that page loads, S3 will already have the file (unless you are uploading stupidly large images, of course!).
Not at all. The Product class in Rails includes a concern which takes care of creating the presigned_url as per the S3 docs I linked at the start.
models/product.rb
class Product < ApplicationRecord
  include Amazon
  attr_accessor :presigned_url
end

Notice presigned_url is not stored as part of the model. I use it once to upload the file from the client (and anyway, by default presigned URLs expire after 15 minutes). See below for what IS persisted in the database for the image.
Finally, here is the magic that creates the S3 presigned URL:
models/concerns/amazon.rb
require 'aws-sdk'

module Amazon
  extend ActiveSupport::Concern

  included do
    after_save :create_presigned_url
  end

  def create_presigned_url
    filename = "#{self.id}.jpg"
    Aws.config[:credentials] = Aws::Credentials.new(
      Rails.application.secrets.aws_access_key_id,
      Rails.application.secrets.aws_secret_access_key)
    s3 = Aws::S3::Resource.new(region: 'eu-west-2')
    bucket = Rails.application.secrets.s3_bucket_name.to_s
    obj = s3.bucket(bucket).object(filename)
    self.presigned_url = obj.presigned_url(:put, acl: 'public-read') #, expires_in: 10*60
    self.update_column(:image_url, obj.public_url)
  end
end

Note the very last line of the method: it stores the public_url of the image in the database as image_url. When you create a file in an S3 bucket, it will automatically have this public_url. Since I set acl: 'public-read', anyone can see the image.
Obviously this example is simple and only allows one image per product, but it shows the general idea of how you can get S3, Rails and Angular to play happily together. Without getting bruised, hopefully!