# Working with S3 using AWS SDK for Java 2.x
## Maven

See aws/aws-sdk-java-v2 on how to use the AWS SDK for Java 2.x. First, import the BOM:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>bom</artifactId>
            <version>2.20.43</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```
Then add the s3 dependency; the BOM manages its version:

```xml
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
</dependency>
```
## Create Client

You can create an S3Client using the builder() method.

```java
public S3Client s3Client() {
    return S3Client.builder()
            .region(Region.US_EAST_1)
            .build();
}
```
If you don't provide a credentials provider, the default DefaultCredentialsProvider is used.

S3Client is thread-safe: it carries the @ThreadSafe annotation (see the S3Client interface API documentation).
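To see what the default behavior amounts to, you can pass DefaultCredentialsProvider explicitly; the sketch below (class name is illustrative) builds a client equivalent to the one above:

```java
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class DefaultCredentialsExample {
    // Passing DefaultCredentialsProvider explicitly is equivalent to omitting
    // credentialsProvider() entirely: the SDK searches environment variables,
    // JVM system properties, the shared credentials file (~/.aws/credentials),
    // and container/instance profiles, in that order.
    public static S3Client s3Client() {
        return S3Client.builder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .build();
    }
}
```

Note that credentials are resolved lazily, at request time, so building the client succeeds even before any credentials are configured.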
To use static credentials when creating the S3Client:

```java
public S3Client s3Client() {
    AwsBasicCredentials awsCreds = AwsBasicCredentials.create(
            "your_access_key_id",
            "your_secret_access_key");
    return S3Client.builder()
            .credentialsProvider(StaticCredentialsProvider.create(awsCreds))
            .region(Region.US_EAST_1)
            .build();
}
```
## List All Buckets

The listBuckets method returns a list of all buckets owned by the authenticated sender of the request.

```java
try {
    ListBucketsResponse response = s3Client.listBuckets();
    response.buckets().forEach(System.out::println);
} catch (S3Exception exception) {
    exception.printStackTrace();
}
```
## List Objects in a Bucket

```java
try {
    ListObjectsRequest request = ListObjectsRequest.builder()
            .bucket("bucket-name")
            .build();
    ListObjectsResponse response = s3Client.listObjects(request);
    List<S3Object> contents = response.contents();
    contents.forEach(s3Object ->
            System.out.printf("key: %s, size: %d%n", s3Object.key(), s3Object.size()));
} catch (S3Exception exception) {
    exception.printStackTrace();
}
```
## Upload Object

The putObject method uploads an object to S3. It takes a PutObjectRequest and a RequestBody as parameters.

```java
try {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket("bucket-name")
            .key("key/to/file")
            .build();
    RequestBody requestBody = RequestBody.fromString("{\"name\": \"hello\"}");
    s3Client.putObject(request, requestBody);
} catch (S3Exception exception) {
    exception.printStackTrace();
}
```
You can also upload with an explicit storage class. The default is STANDARD. Use INTELLIGENT_TIERING if you want AWS to automatically move your data to the most cost-effective storage tier based on changing access patterns, STANDARD_IA for infrequently accessed data, and GLACIER for archival storage.

```java
try {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket("bucket-name")
            .key("key/to/file")
            .storageClass(StorageClass.STANDARD_IA)
            .build();
    RequestBody requestBody = RequestBody.fromString("{\"name\": \"hello\"}");
    s3Client.putObject(request, requestBody);
} catch (S3Exception exception) {
    exception.printStackTrace();
}
```
## Get Object

```java
try {
    GetObjectRequest request = GetObjectRequest.builder()
            .bucket("bucket-name")
            .key("key/to/file")
            .build();
    // getObject returns a stream; try-with-resources ensures it is closed.
    try (ResponseInputStream<GetObjectResponse> objectStream = s3Client.getObject(request)) {
        byte[] bytes = objectStream.readAllBytes();
        String content = new String(bytes, StandardCharsets.UTF_8);
        System.out.println(content);
    }
} catch (S3Exception | IOException exception) {
    exception.printStackTrace();
}
```
## Delete Object

```java
try {
    DeleteObjectRequest deleteObjectRequest = DeleteObjectRequest.builder()
            .bucket("bucket-name")
            .key("key/to/file")
            .build();
    s3Client.deleteObject(deleteObjectRequest);
} catch (S3Exception e) {
    System.err.println(e.awsErrorDetails().errorMessage());
}
```
## Close S3Client

When the S3Client is no longer needed, call its close() method to release the underlying resources.
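Because S3Client implements AutoCloseable, a try-with-resources block can handle this automatically. A minimal sketch (class name is illustrative):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class CloseClientExample {
    public static void main(String[] args) {
        // try-with-resources invokes close() automatically when the block
        // exits, releasing the client's HTTP connection pool even if an
        // exception is thrown inside the block.
        try (S3Client s3Client = S3Client.builder()
                .region(Region.US_EAST_1)
                .build()) {
            // ... use s3Client here ...
        } // close() is called here
    }
}
```

This is preferable to calling close() by hand, since the client is released on every exit path.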