├── doc_source ├── glossary.md ├── API_Reference.md ├── streaming-video-kinesis-output-reference-streamprocessorinformation.md ├── vulnerability-analysis-and-management.md ├── streaming-video-kinesis-output-reference-inputinformation.md ├── labels-detecting-custom-labels.md ├── security-inter-network-privacy.md ├── programming.md ├── streaming-video-kinesis-output-reference-facematch.md ├── best-practices.md ├── service_code_examples_scenarios.md ├── tutorials.md ├── service_code_examples_cross-service_examples.md ├── operation-latency.md ├── disaster-recovery-resiliency.md ├── streaming-video-kinesis-output-reference-facesearchresponse.md ├── infrastructure-security.md ├── security.md ├── getting-started-console.md ├── recommendations-camera-stored-streaming-video.md ├── streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo.md ├── getting-started.md ├── streaming-video-kinesis-output-reference-detectedface.md ├── security_iam_resource-based-policy-examples.md ├── images.md ├── recommendations-camera-streaming-video.md ├── example_cross_DetectFaces_section.md ├── celebrity-recognition-vs-face-search.md ├── celebrities.md ├── get-started-exercise.md ├── example_cross_DetectLabels_section.md ├── service_code_examples_actions.md ├── how-it-works.md ├── faces.md ├── streaming-video-detect-labels.md ├── video-notification-payload.md ├── collections-streaming.md ├── rekognition-compliance.md ├── recommendations-camera-image-video.md ├── detect-faces-console.md ├── aggregated-metrics.md ├── example_cross_RekognitionPhotoAnalyzerPPE_section.md ├── guidance-face-attributes.md ├── sdk-general-information-section.md ├── data-protection.md ├── example_rekognition_GetCelebrityInfo_section.md ├── compare-faces-console.md ├── face-feature-differences.md ├── recommendations-facial-input-images.md ├── service_code_examples.md ├── example_cross_RekognitionVideoDetection_section.md ├── stored-video-tutorial-v2.md ├── images-information.md ├── face-detection-model.md ├── setup-awscli-sdk.md ├── streaming-video-tagging-stream-processor.md ├── setting-up-your-amazon-rekognition-streaming-video-resources.md └── streaming-video.md ├── .github └── PULL_REQUEST_TEMPLATE.md ├── LICENSE-SUMMARY ├── README.md ├── code_examples ├── python_examples │ ├── image │ │ ├── python-detect-labels.py │ │ ├── python-get-celebrity-info.py │ │ ├── python-create-collection.py │ │ ├── python-detect-labels-local-file.py │ │ ├── python-detect-moderation-labels.py │ │ ├── python-delete-faces-from-collection.py │ │ ├── python-detect-faces.py │ │ ├── python-list-collections.py │ │ ├── python-detect-text.py │ │ ├── python-list-faces-in-collection.py │ │ ├── python-recognize-celebrities.py │ │ ├── python-search-faces-collection.py │ │ ├── python-search-faces-by-image-collection.py │ │ ├── python-delete-collection.py │ │ ├── python-compare-faces.py │ │ ├── python-describe-collection.py │ │ ├── python-add-faces-to-collection.py │ │ └── python-image-orientation-bounding-box.py │ └── README.md ├── dotnet_examples │ ├── image │ │ ├── net-delete-collection.cs │ │ ├── net-delete-faces.cs │ │ ├── net-create-collection.cs │ │ ├── net-celebrity-info.cs │ │ ├── net-search-faces-matching-id.cs │ │ ├── net-list-faces.cs │ │ ├── net-list-collections.cs │ │ ├── net-detect-labels.cs │ │ ├── net-add-faces.cs │ │ ├── net-search-faces-matching-image.cs │ │ ├── net-detect-moderation-labels.cs │ │ ├── net-detect-text.cs │ │ ├── net-detect-labels-local-file.cs │ │ ├── net-celebrities-in-image.cs │ │ ├── net-detect-faces.cs │ │ └── 
net-compare-faces.cs │ └── README.md ├── java_examples │ ├── image │ │ ├── java-delete-collection.java │ │ ├── java-celebrity-info.java │ │ ├── java-create-collection.java │ │ ├── java-delete-faces-from-collection.java │ │ ├── java-describe-collection.java │ │ ├── java-list-collections.java │ │ ├── java-detect-labels.java │ │ ├── java-list-faces-In-collection.java │ │ ├── java-list-faces-in-collection.java │ │ ├── java-search-face-matching-id-collection.java │ │ ├── java-detect-moderation-labels.java │ │ ├── java-detect-labels-local-file.java │ │ ├── java-detect-text.java │ │ ├── java-search-face-matching-image-collection.java │ │ ├── java-detect-faces.java │ │ ├── java-add-faces-to-collection.java │ │ ├── java-recognize-celebrities.java │ │ └── java-compare-faces.java │ ├── streaming_video │ │ └── java-stream-processor-starter.java │ └── README.md ├── php_examples │ ├── php-detect-labels-local-file.php │ └── README.md └── javascript_examples │ ├── js-estimate-age-snippet.html │ └── README.md └── LICENSE-SAMPLECODE /doc_source/glossary.md: -------------------------------------------------------------------------------- 1 | # AWS glossary 2 | 3 | For the latest AWS terminology, see the [AWS glossary](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html) in the *AWS General Reference*\. -------------------------------------------------------------------------------- /doc_source/API_Reference.md: -------------------------------------------------------------------------------- 1 | # API Reference 2 | 3 | The Amazon Rekognition API reference is now located at [Amazon Rekognition API Reference](https://docs.aws.amazon.com/rekognition/latest/APIReference/Welcome.html)\. -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | *Issue #, if available:* 2 | 3 | *Description of changes:* 4 | 5 | 6 | By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. 7 | -------------------------------------------------------------------------------- /LICENSE-SUMMARY: -------------------------------------------------------------------------------- 1 | Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | The documentation is made available under the Creative Commons Attribution-ShareAlike 4.0 International License. See the LICENSE file. 4 | 5 | The sample code within this documentation is made available under a modified MIT license. See the LICENSE-SAMPLECODE file. 6 | -------------------------------------------------------------------------------- /doc_source/streaming-video-kinesis-output-reference-streamprocessorinformation.md: -------------------------------------------------------------------------------- 1 | # StreamProcessorInformation 2 | 3 | Status information about the stream processor\. 4 | 5 | **Status** 6 | 7 | The current status of the stream processor\. The one possible value is RUNNING\. 8 | 9 | Type: String -------------------------------------------------------------------------------- /doc_source/vulnerability-analysis-and-management.md: -------------------------------------------------------------------------------- 1 | # Configuration and vulnerability analysis in Amazon Rekognition 2 | 3 | Configuration and IT controls are a shared responsibility between AWS and you, our customer\. 
For more information, see the AWS [shared responsibility model](http://aws.amazon.com/compliance/shared-responsibility-model/)\. -------------------------------------------------------------------------------- /doc_source/streaming-video-kinesis-output-reference-inputinformation.md: -------------------------------------------------------------------------------- 1 | # InputInformation 2 | 3 | Information about a source video stream that's used by Amazon Rekognition Video\. For more information, see [Working with streaming video events](streaming-video.md)\. 4 | 5 | **KinesisVideo** 6 | 7 | Type: [KinesisVideo](streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo.md) object -------------------------------------------------------------------------------- /doc_source/labels-detecting-custom-labels.md: -------------------------------------------------------------------------------- 1 | # Detecting custom labels 2 | 3 | Amazon Rekognition Custom Labels can identify the objects and scenes in images that are specific to your business needs, such as logos or engineering machine parts\. For more information, see [What Is Amazon Rekognition Custom Labels?](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/what-is.html) in the *Amazon Rekognition Custom Labels Developer Guide*\. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Amazon Rekognition Developer Guide 2 | 3 | The open source version of the Amazon Rekognition docs. You can submit feedback & requests for changes by submitting issues in this repo or by making proposed changes & submitting a pull request. 4 | 5 | ## License Summary 6 | 7 | The documentation is made available under the Creative Commons Attribution-ShareAlike 4.0 International License. See the LICENSE file. 8 | 9 | The sample code within this documentation is made available under a modified MIT license. See the LICENSE-SAMPLECODE file. 10 | -------------------------------------------------------------------------------- /doc_source/security-inter-network-privacy.md: -------------------------------------------------------------------------------- 1 | # Internetwork traffic privacy 2 | 3 | An Amazon Virtual Private Cloud \(Amazon VPC\) endpoint for Amazon Rekognition is a logical entity within a VPC that allows connectivity only to Amazon Rekognition\. Amazon VPC routes requests to Amazon Rekognition and routes responses back to the VPC\. For more information, see [VPC Endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) in the *Amazon VPC User Guide*\. For information about using Amazon VPC endpoints with Amazon Rekognition see [Using Amazon Rekognition with Amazon VPC endpoints](vpc.md)\. -------------------------------------------------------------------------------- /doc_source/programming.md: -------------------------------------------------------------------------------- 1 | # Working with images and videos 2 | 3 | You can use Amazon Rekognition API operations with images, stored videos, and streaming videos\. This section provides general information about writing code that accesses Amazon Rekognition\. Other sections in this guide provide information about specific types of image and video analysis, such as face detection\. 
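As a minimal sketch of what this looks like with the AWS SDK for Python \(Boto3\), assuming that your AWS credentials and Region are already configured and that the bucket and object names are placeholders, you create a Rekognition client and call an analysis operation such as `DetectLabels`:

```
import boto3

# Create an Amazon Rekognition client. The Region and credentials
# come from your AWS configuration.
client = boto3.client('rekognition')

# Analyze an image stored in Amazon S3 (placeholder bucket and object names).
response = client.detect_labels(
    Image={'S3Object': {'Bucket': 'bucket', 'Name': 'input.jpg'}},
    MaxLabels=10)

for label in response['Labels']:
    print(label['Name'] + ' : ' + str(label['Confidence']))
```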
4 | 5 | **Topics** 6 | + [Working with images](images.md) 7 | + [Working with stored video analysis](video.md) 8 | + [Working with streaming video events](streaming-video.md) 9 | + [Error handling](error-handling.md) 10 | + [Using Amazon Rekognition as a FedRAMP authorized service](fedramp.md) -------------------------------------------------------------------------------- /doc_source/streaming-video-kinesis-output-reference-facematch.md: -------------------------------------------------------------------------------- 1 | # MatchedFace 2 | 3 | Information about a face that matches a face detected in an analyzed video frame\. 4 | 5 | **Face** 6 | 7 | Face match information for a face in the input collection that matches the face in the [DetectedFace](streaming-video-kinesis-output-reference-detectedface.md) object\. 8 | 9 | Type: [Face](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Face.html) object 10 | 11 | **Similarity** 12 | 13 | The level of confidence \(1\-100\) that the faces match\. 1 is the lowest confidence, 100 is the highest\. 14 | 15 | Type: Number -------------------------------------------------------------------------------- /doc_source/best-practices.md: -------------------------------------------------------------------------------- 1 | # Best practices for sensors, input images, and videos 2 | 3 | This section contains best practice information for using Amazon Rekognition\. 4 | 5 | **Topics** 6 | + [Amazon Rekognition Image operation latency](operation-latency.md) 7 | + [Recommendations for facial comparison input images](recommendations-facial-input-images.md) 8 | + [Recommendations for camera setup \(image and video\)](recommendations-camera-image-video.md) 9 | + [Recommendations for camera setup \(stored and streaming video\)](recommendations-camera-stored-streaming-video.md) 10 | + [Recommendations for camera setup \(streaming video\)](recommendations-camera-streaming-video.md) -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-detect-labels.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | fileName='input.jpg' 8 | bucket='bucket' 9 | 10 | client=boto3.client('rekognition') 11 | 12 | response = client.detect_labels(Image={'S3Object':{'Bucket':bucket,'Name':fileName}}) 13 | 14 | print('Detected labels for ' + fileName) 15 | for label in response['Labels']: 16 | print (label['Name'] + ' : ' + str(label['Confidence'])) 17 | 18 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-get-celebrity-info.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
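# Demonstrates the GetCelebrityInfo operation: looks up the name and reference URLs for a celebrity ID that was returned by RecognizeCelebrities.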
3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | id="nnnnnnnn" 9 | 10 | client=boto3.client('rekognition') 11 | 12 | #Display celebrity info 13 | print('Getting celebrity info for celebrity: ' + id) 14 | response=client.get_celebrity_info(Id=id) 15 | 16 | print (response['Name']) 17 | print ('Further information (if available):') 18 | for url in response['Urls']: 19 | print (url) 20 | 21 | -------------------------------------------------------------------------------- /doc_source/service_code_examples_scenarios.md: -------------------------------------------------------------------------------- 1 | # Scenarios for Amazon Rekognition using AWS SDKs 2 | 3 | The following code examples show you how to implement common scenarios in Amazon Rekognition with AWS SDKs\. These scenarios show you how to accomplish specific tasks by calling multiple functions within Amazon Rekognition\. Each scenario includes a link to GitHub, where you can find instructions on how to set up and run the code\. 4 | 5 | **Topics** 6 | + [Build a collection and find faces in it](example_rekognition_Usage_FindFacesInCollection_section.md) 7 | + [Detect and display elements in images](example_rekognition_Usage_DetectAndDisplayImage_section.md) 8 | + [Detect information in videos](example_rekognition_VideoDetection_section.md) -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-create-collection.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | maxResults=2 9 | collectionId='MyCollection' 10 | 11 | client=boto3.client('rekognition') 12 | 13 | #Create a collection 14 | print('Creating collection:' + collectionId) 15 | response=client.create_collection(CollectionId=collectionId) 16 | print('Collection ARN: ' + response['CollectionArn']) 17 | print('Status code: ' + str(response['StatusCode'])) 18 | print('Done...') 19 | 20 | 21 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-detect-labels-local-file.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | imageFile='input.jpg' 9 | client=boto3.client('rekognition') 10 | 11 | with open(imageFile, 'rb') as image: 12 | response = client.detect_labels(Image={'Bytes': image.read()}) 13 | 14 | print('Detected labels in ' + imageFile) 15 | for label in response['Labels']: 16 | print (label['Name'] + ' : ' + str(label['Confidence'])) 17 | 18 | print('Done...') 19 | 20 | 21 | 22 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-detect-moderation-labels.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | photo='moderate.png' 8 | bucket='bucket' 9 | 10 | client=boto3.client('rekognition') 11 | 12 | response = client.detect_moderation_labels(Image={'S3Object':{'Bucket':bucket,'Name':photo}}) 13 | 14 | print('Detected labels for ' + photo) 15 | for label in response['ModerationLabels']: 16 | print (label['Name'] + ' : ' + str(label['Confidence'])) 17 | print (label['ParentName']) 18 | 19 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-delete-faces-from-collection.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | collectionId='MyCollection' 9 | faces=[] 10 | faces.append("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx") 11 | 12 | client=boto3.client('rekognition') 13 | 14 | response=client.delete_faces(CollectionId=collectionId, 15 | FaceIds=faces) 16 | 17 | print(str(len(response['DeletedFaces'])) + ' faces deleted:') 18 | for faceId in response['DeletedFaces']: 19 | print (faceId) 20 | -------------------------------------------------------------------------------- /doc_source/tutorials.md: -------------------------------------------------------------------------------- 1 | # Tutorials 2 | 3 | These cross\-service tutorials demonstrate how to use Rekognition's API operations alongside other AWS services to create sample applications and accomplish a variety of tasks\. Most of these tutorials make use of Amazon S3 to store images or video\. Other commonly used services include AWS Lambda\. 4 | 5 | **Topics** 6 | + [Storing Amazon Rekognition Data with Amazon RDS and DynamoDB](storage-tutorial.md) 7 | + [Using Amazon Rekognition and Lambda to tag assets in an Amazon S3 bucket](images-lambda-s3-tutorial.md) 8 | + [Creating AWS video analyzer applications](stored-video-tutorial-v2.md) 9 | + [Creating an Amazon Rekognition Lambda function](stored-video-lambda.md) 10 | + [Using Amazon Rekognition for Identity Verification](identity-verification-tutorial.md) -------------------------------------------------------------------------------- /doc_source/service_code_examples_cross-service_examples.md: -------------------------------------------------------------------------------- 1 | # Cross\-service examples for Amazon Rekognition using AWS SDKs 2 | 3 | The following sample applications use AWS SDKs to combine Amazon Rekognition with other AWS services\. Each example includes a link to GitHub, where you can find instructions on how to set up and run the application\. 
4 | 5 | **Topics** 6 | + [Detect PPE in images](example_cross_RekognitionPhotoAnalyzerPPE_section.md) 7 | + [Detect faces in an image](example_cross_DetectFaces_section.md) 8 | + [Detect objects in images](example_cross_RekognitionPhotoAnalyzer_section.md) 9 | + [Detect people and objects in a video](example_cross_RekognitionVideoDetection_section.md) 10 | + [Save EXIF and other image information](example_cross_DetectLabels_section.md) -------------------------------------------------------------------------------- /doc_source/operation-latency.md: -------------------------------------------------------------------------------- 1 | # Amazon Rekognition Image operation latency 2 | 3 | To ensure the lowest possible latency for Amazon Rekognition Image operations, consider the following: 4 | + The Region for the Amazon S3 bucket that contains your images must match the Region you use for Amazon Rekognition Image API operations\. 5 | + Calling an Amazon Rekognition Image operation with image bytes is faster than uploading the image to an Amazon S3 bucket and then referencing the uploaded image in an Amazon Rekognition Image operation\. Consider this approach if you are uploading images to Amazon Rekognition Image for near real\-time processing\. For example, images uploaded from an IP camera or images uploaded through a web portal\. 6 | + If the image is already in an Amazon S3 bucket, referencing it in an Amazon Rekognition Image operation is probably faster than passing image bytes to the operation\. -------------------------------------------------------------------------------- /doc_source/disaster-recovery-resiliency.md: -------------------------------------------------------------------------------- 1 | # Resilience in Amazon Rekognition 2 | 3 | The AWS global infrastructure is built around AWS Regions and Availability Zones\. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low\-latency, high\-throughput, and highly redundant networking\. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption\. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures\. 4 | 5 | For more information about AWS Regions and Availability Zones, see [AWS Global Infrastructure](http://aws.amazon.com/about-aws/global-infrastructure/)\. 6 | 7 | In addition to the AWS global infrastructure, Amazon Rekognition offers several features to help support your data resiliency and backup needs\. -------------------------------------------------------------------------------- /doc_source/streaming-video-kinesis-output-reference-facesearchresponse.md: -------------------------------------------------------------------------------- 1 | # FaceSearchResponse 2 | 3 | Information about a face detected in a streaming video frame and the faces in a collection that match the detected face\. You specify the collection in a call to [CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html)\. For more information, see [Working with streaming video events](streaming-video.md)\. 4 | 5 | **DetectedFace** 6 | 7 | Face details for a face detected in an analyzed video frame\. 
8 | 9 | Type: [DetectedFace](streaming-video-kinesis-output-reference-detectedface.md) object 10 | 11 | **MatchedFaces** 12 | 13 | An array of face details for faces in a collection that matches the face detected in `DetectedFace`\. 14 | 15 | Type: [MatchedFace](streaming-video-kinesis-output-reference-facematch.md) object array -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-detect-faces.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | import json 6 | 7 | if __name__ == "__main__": 8 | photo='input.jpg' 9 | bucket='bucket' 10 | client=boto3.client('rekognition') 11 | 12 | response = client.detect_faces(Image={'S3Object':{'Bucket':bucket,'Name':photo}},Attributes=['ALL']) 13 | 14 | print('Detected faces for ' + photo) 15 | for faceDetail in response['FaceDetails']: 16 | print('The detected face is between ' + str(faceDetail['AgeRange']['Low']) 17 | + ' and ' + str(faceDetail['AgeRange']['High']) + ' years old') 18 | print('Here are the other attributes:') 19 | print(json.dumps(faceDetail, indent=4, sort_keys=True)) 20 | -------------------------------------------------------------------------------- /LICENSE-SAMPLECODE: -------------------------------------------------------------------------------- 1 | Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this 4 | software and associated documentation files (the "Software"), to deal in the Software 5 | without restriction, including without limitation the rights to use, copy, modify, 6 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 7 | permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 10 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 11 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 12 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 13 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 14 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 15 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-list-collections.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
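# Demonstrates the ListCollections operation: lists the face collections in the current account and Region, using NextToken to page through the results.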
3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | maxResults=2 9 | 10 | client=boto3.client('rekognition') 11 | 12 | #Display all the collections 13 | print('Displaying collections...') 14 | response=client.list_collections(MaxResults=maxResults) 15 | 16 | while True: 17 | collections=response['CollectionIds'] 18 | 19 | for collection in collections: 20 | print (collection) 21 | if 'NextToken' in response: 22 | nextToken=response['NextToken'] 23 | response=client.list_collections(NextToken=nextToken,MaxResults=maxResults) 24 | 25 | else: 26 | break 27 | 28 | print('done...') 29 | 30 | -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-delete-collection.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | public class DeleteCollection 9 | { 10 | public static void Example() 11 | { 12 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 13 | 14 | String collectionId = "MyCollection"; 15 | Console.WriteLine("Deleting collection: " + collectionId); 16 | 17 | DeleteCollectionRequest deleteCollectionRequest = new DeleteCollectionRequest() 18 | { 19 | CollectionId = collectionId 20 | }; 21 | 22 | DeleteCollectionResponse deleteCollectionResponse = rekognitionClient.DeleteCollection(deleteCollectionRequest); 23 | Console.WriteLine(collectionId + ": " + deleteCollectionResponse.StatusCode); 24 | } 25 | } 26 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-detect-text.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | bucket='bucket' 9 | photo='inputtext.jpg' 10 | 11 | client=boto3.client('rekognition') 12 | 13 | 14 | response=client.detect_text(Image={'S3Object':{'Bucket':bucket,'Name':photo}}) 15 | 16 | 17 | textDetections=response['TextDetections'] 18 | print(response) 19 | 20 | for text in textDetections: 21 | print('Detected text:' + text['DetectedText']) 22 | print('Confidence: ' + "{:.2f}".format(text['Confidence']) + "%") 23 | print('Id: {}'.format(text['Id'])) 24 | if 'ParentId' in text: 25 | print('Parent Id: {}'.format(text['ParentId'])) 26 | print('Type:' + text['Type']) 27 | print() 28 | 29 | -------------------------------------------------------------------------------- /doc_source/infrastructure-security.md: -------------------------------------------------------------------------------- 1 | # Infrastructure security in Amazon Rekognition 2 | 3 | As a managed service, Amazon Rekognition is protected by the AWS global network security procedures that are described in the [Amazon Web Services: Overview of Security Processes](https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf) whitepaper\. 4 | 5 | You use AWS published API calls to access Amazon Rekognition through the network\.
Clients must support Transport Layer Security \(TLS\) 1\.0 or later\. We recommend TLS 1\.2 or later\. Clients must also support cipher suites with perfect forward secrecy \(PFS\) such as Ephemeral Diffie\-Hellman \(DHE\) or Elliptic Curve Ephemeral Diffie\-Hellman \(ECDHE\)\. Most modern systems such as Java 7 and later support these modes\. 6 | 7 | Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal\. Or you can use the [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) \(AWS STS\) to generate temporary security credentials to sign requests\. -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-list-faces-in-collection.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | bucket='bucket' 9 | collectionId='MyCollection' 10 | maxResults=2 11 | tokens=True 12 | 13 | client=boto3.client('rekognition') 14 | response=client.list_faces(CollectionId=collectionId, 15 | MaxResults=maxResults) 16 | 17 | print('Faces in collection ' + collectionId) 18 | 19 | 20 | while tokens: 21 | 22 | faces=response['Faces'] 23 | 24 | for face in faces: 25 | print (face) 26 | if 'NextToken' in response: 27 | nextToken=response['NextToken'] 28 | response=client.list_faces(CollectionId=collectionId, 29 | NextToken=nextToken,MaxResults=maxResults) 30 | else: 31 | tokens=False 32 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-recognize-celebrities.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | import json 6 | 7 | if __name__ == "__main__": 8 | photo='moviestars.jpg' 9 | 10 | client=boto3.client('rekognition') 11 | 12 | with open(photo, 'rb') as image: 13 | response = client.recognize_celebrities(Image={'Bytes': image.read()}) 14 | 15 | print('Detected faces for ' + photo) 16 | for celebrity in response['CelebrityFaces']: 17 | print('Name: ' + celebrity['Name']) 18 | print('Id: ' + celebrity['Id']) 19 | print('Position:') 20 | print(' Left: ' + '{:.2f}'.format(celebrity['Face']['BoundingBox']['Left'])) 21 | print(' Top: ' + '{:.2f}'.format(celebrity['Face']['BoundingBox']['Top'])) 22 | print('Info') 23 | for url in celebrity['Urls']: 24 | print(' ' + url) 25 | print() 26 | 27 | 28 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-search-faces-collection.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
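# Demonstrates the SearchFaces operation: searches a collection for faces that match a given face ID and prints each match with its similarity score.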
3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | bucket='bucket' 9 | collectionId='MyCollection' 10 | threshold = 50 11 | maxFaces=2 12 | faceId='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' 13 | 14 | client=boto3.client('rekognition') 15 | 16 | 17 | response=client.search_faces(CollectionId=collectionId, 18 | FaceId=faceId, 19 | FaceMatchThreshold=threshold, 20 | MaxFaces=maxFaces) 21 | 22 | 23 | faceMatches=response['FaceMatches'] 24 | print('Matching faces') 25 | for match in faceMatches: 26 | print('FaceId:' + match['Face']['FaceId']) 27 | print('Similarity: ' + "{:.2f}".format(match['Similarity']) + "%") 28 | print() 29 | 30 | -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-delete-faces.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | using System; 5 | using System.Collections.Generic; 6 | using Amazon.Rekognition; 7 | using Amazon.Rekognition.Model; 8 | 9 | public class DeleteFaces 10 | { 11 | public static void Example() 12 | { 13 | String collectionId = "MyCollection"; 14 | List<String> faces = new List<String>() { "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" }; 15 | 16 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 17 | 18 | DeleteFacesRequest deleteFacesRequest = new DeleteFacesRequest() 19 | { 20 | CollectionId = collectionId, 21 | FaceIds = faces 22 | }; 23 | 24 | DeleteFacesResponse deleteFacesResponse = rekognitionClient.DeleteFaces(deleteFacesRequest); 25 | foreach (String face in deleteFacesResponse.DeletedFaces) 26 | Console.WriteLine("FaceID: " + face); 27 | } 28 | } 29 | -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-create-collection.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | public class CreateCollection 9 | { 10 | public static void Example() 11 | { 12 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 13 | 14 | String collectionId = "MyCollection"; 15 | Console.WriteLine("Creating collection: " + collectionId); 16 | 17 | CreateCollectionRequest createCollectionRequest = new CreateCollectionRequest() 18 | { 19 | CollectionId = collectionId 20 | }; 21 | 22 | CreateCollectionResponse createCollectionResponse = rekognitionClient.CreateCollection(createCollectionRequest); 23 | Console.WriteLine("CollectionArn : " + createCollectionResponse.CollectionArn); 24 | Console.WriteLine("Status code : " + createCollectionResponse.StatusCode); 25 | 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-search-faces-by-image-collection.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | bucket='bucket' 9 | collectionId='MyCollection' 10 | fileName='input.jpg' 11 | threshold = 70 12 | maxFaces=2 13 | 14 | client=boto3.client('rekognition') 15 | 16 | 17 | response=client.search_faces_by_image(CollectionId=collectionId, 18 | Image={'S3Object':{'Bucket':bucket,'Name':fileName}}, 19 | FaceMatchThreshold=threshold, 20 | MaxFaces=maxFaces) 21 | 22 | 23 | faceMatches=response['FaceMatches'] 24 | print('Matching faces') 25 | for match in faceMatches: 26 | print('FaceId:' + match['Face']['FaceId']) 27 | print('Similarity: ' + "{:.2f}".format(match['Similarity']) + "%") 28 | print() 29 | 30 | -------------------------------------------------------------------------------- /doc_source/security.md: -------------------------------------------------------------------------------- 1 | # Amazon Rekognition Security 2 | 3 | Cloud security at AWS is the highest priority\. As an AWS customer, you benefit from a data center and network architecture that are built to meet the requirements of the most security\-sensitive organizations\. 4 | 5 | Use the following topics to learn how to secure your Amazon Rekognition resources\. 6 | 7 | **Topics** 8 | + [Identity and access management for Amazon Rekognition](security-iam.md) 9 | + [Data protection in Amazon Rekognition](data-protection.md) 10 | + [Monitoring Rekognition](rekognition-monitoring.md) 11 | + [Logging Amazon Rekognition API calls with AWS CloudTrail](logging-using-cloudtrail.md) 12 | + [Using Amazon Rekognition with Amazon VPC endpoints](vpc.md) 13 | + [Compliance validation for Amazon Rekognition](rekognition-compliance.md) 14 | + [Resilience in Amazon Rekognition](disaster-recovery-resiliency.md) 15 | + [Configuration and vulnerability analysis in Amazon Rekognition](vulnerability-analysis-and-management.md) 16 | + [Cross\-service confused deputy prevention](cross-service-confused-deputy-prevention.md) 17 | + [Infrastructure security in Amazon Rekognition](infrastructure-security.md) -------------------------------------------------------------------------------- /doc_source/getting-started-console.md: -------------------------------------------------------------------------------- 1 | # Step 4: Getting started using the Amazon Rekognition console 2 | 3 | This section shows you how to use a subset of Amazon Rekognition's capabilities such as object and scene detection, facial analysis, and face comparison in a set of images\. For more information, see [How Amazon Rekognition works](how-it-works.md)\. You can also use the Amazon Rekognition API or AWS CLI to detect objects and scenes, detect faces, and compare and search faces\. For more information, see [Step 3: Getting started using the AWS CLI and AWS SDK API](get-started-exercise.md)\. 4 | 5 | This section also shows you how to see aggregated Amazon CloudWatch metrics for Rekognition by using the Rekognition console\.
6 | 7 | **Topics** 8 | + [Exercise 1: Detect objects and scenes \(Console\)](detect-labels-console.md) 9 | + [Exercise 2: Analyze faces in an image \(console\)](detect-faces-console.md) 10 | + [Exercise 3: Compare faces in images \(console\)](compare-faces-console.md) 11 | + [Exercise 4: See aggregated metrics \(console\)](aggregated-metrics.md) 12 | 13 | ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/amazon-rekognition-start-page.png) -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-delete-collection.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | from botocore.exceptions import ClientError 6 | from os import environ 7 | 8 | if __name__ == "__main__": 9 | 10 | collectionId='MyCollection' 11 | print('Attempting to delete collection ' + collectionId) 12 | client=boto3.client('rekognition') 13 | statusCode='' 14 | try: 15 | response=client.delete_collection(CollectionId=collectionId) 16 | statusCode=response['StatusCode'] 17 | 18 | except ClientError as e: 19 | if e.response['Error']['Code'] == 'ResourceNotFoundException': 20 | print ('The collection ' + collectionId + ' was not found ') 21 | else: 22 | print ('Error other than Not Found occurred: ' + e.response['Error']['Message']) 23 | statusCode=e.response['ResponseMetadata']['HTTPStatusCode'] 24 | print('Operation returned Status Code: ' + str(statusCode)) 25 | print('Done...') 26 | 27 | 28 | 29 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-compare-faces.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | sourceFile='source.jpg' 9 | targetFile='target.jpg' 10 | client=boto3.client('rekognition') 11 | 12 | imageSource=open(sourceFile,'rb') 13 | imageTarget=open(targetFile,'rb') 14 | 15 | response=client.compare_faces(SimilarityThreshold=70, 16 | SourceImage={'Bytes': imageSource.read()}, 17 | TargetImage={'Bytes': imageTarget.read()}) 18 | 19 | for faceMatch in response['FaceMatches']: 20 | position = faceMatch['Face']['BoundingBox'] 21 | confidence = str(faceMatch['Face']['Confidence']) 22 | print('The face at ' + 23 | str(position['Left']) + ' ' + 24 | str(position['Top']) + 25 | ' matches with ' + confidence + '% confidence') 26 | 27 | imageSource.close() 28 | imageTarget.close() 29 | 30 | -------------------------------------------------------------------------------- /doc_source/recommendations-camera-stored-streaming-video.md: -------------------------------------------------------------------------------- 1 | # Recommendations for camera setup \(stored and streaming video\) 2 | 3 | The following recommendations are in addition to [Recommendations for camera setup \(image and video\)](recommendations-camera-image-video.md)\. 4 | + The codec should be h\.264 encoded\. 5 | + The recommended frame rate is 30 fps\. 
\(It should not be less than 5 fps\.\) 6 | + The recommended encoder bitrate is 3 Mbps\. \(It should not be less than 1\.5 Mbps\.\) 7 | + Frame Rate vs\. Frame Resolution – If the encoder bitrate is a constraint, we recommend favoring a higher frame resolution over a higher frame rate for better face search results\. This ensures that Amazon Rekognition gets the best quality frame within the allocated bitrate\. However, there is a downside to this\. Because of the low frame rate, the camera misses fast motion in a scene\. It's important to understand the trade\-offs between these two parameters for a given setup\. For example, if the maximum possible bitrate is 1\.5 Mbps, a camera can capture 1080p at 5 fps or 720p at 15 fps\. The choice between the two is application dependent, as long as the recommended face resolution of 50 x 50 pixels is met\. -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-celebrity-info.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | 9 | public class CelebrityInfo 10 | { 11 | public static void Example() 12 | { 13 | String id = "nnnnnnnn"; 14 | 15 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 16 | 17 | GetCelebrityInfoRequest celebrityInfoRequest = new GetCelebrityInfoRequest() 18 | { 19 | Id = id 20 | }; 21 | 22 | Console.WriteLine("Getting information for celebrity: " + id); 23 | 24 | GetCelebrityInfoResponse celebrityInfoResponse = rekognitionClient.GetCelebrityInfo(celebrityInfoRequest); 25 | 26 | //Display celebrity information 27 | Console.WriteLine("celebrity name: " + celebrityInfoResponse.Name); 28 | Console.WriteLine("Further information (if available):"); 29 | foreach (String url in celebrityInfoResponse.Urls) 30 | Console.WriteLine(url); 31 | } 32 | } -------------------------------------------------------------------------------- /doc_source/streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo.md: -------------------------------------------------------------------------------- 1 | # KinesisVideo 2 | 3 | Information about the Kinesis video stream that streams the source video into Amazon Rekognition Video\. For more information, see [Working with streaming video events](streaming-video.md)\. 4 | 5 | **StreamArn** 6 | 7 | The Amazon Resource Name \(ARN\) of the Kinesis video stream\. 8 | 9 | Type: String 10 | 11 | **FragmentNumber** 12 | 13 | The fragment of streaming video that contains the frame that this record represents\. 14 | 15 | Type: String 16 | 17 | **ProducerTimestamp** 18 | 19 | The producer\-side Unix time stamp of the fragment\. For more information, see [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html)\. 20 | 21 | Type: Number 22 | 23 | **ServerTimestamp** 24 | 25 | The server\-side Unix time stamp of the fragment\. For more information, see [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html)\. 26 | 27 | Type: Number 28 | 29 | **FrameOffsetInSeconds** 30 | 31 | The offset of the frame \(in seconds\) inside the fragment\. 
32 | 33 | Type: Number -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-describe-collection.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | import boto3 5 | from botocore.exceptions import ClientError 6 | 7 | if __name__ == "__main__": 8 | 9 | collectionId='MyCollection' 10 | print('Attempting to describe collection ' + collectionId) 11 | client=boto3.client('rekognition') 12 | 13 | try: 14 | response=client.describe_collection(CollectionId=collectionId) 15 | print("Collection Arn: " + response['CollectionARN']) 16 | print("Face Count: " + str(response['FaceCount'])) 17 | print("Face Model Version: " + response['FaceModelVersion']) 18 | print("Timestamp: " + str(response['CreationTimestamp'])) 19 | 20 | 21 | except ClientError as e: 22 | if e.response['Error']['Code'] == 'ResourceNotFoundException': 23 | print ('The collection ' + collectionId + ' was not found ') 24 | else: 25 | print ('Error other than Not Found occurred: ' + e.response['Error']['Message']) 26 | print('Done...') 27 | 28 | 29 | 30 | 31 | 32 | -------------------------------------------------------------------------------- /code_examples/java_examples/image/java-delete-collection.java: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | package aws.example.rekognition.image; 5 | import com.amazonaws.services.rekognition.AmazonRekognition; 6 | import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder; 7 | import com.amazonaws.services.rekognition.model.DeleteCollectionRequest; 8 | import com.amazonaws.services.rekognition.model.DeleteCollectionResult; 9 | 10 | 11 | public class DeleteCollection { 12 | 13 | public static void main(String[] args) throws Exception { 14 | 15 | AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient(); 16 | 17 | String collectionId = "MyCollection"; 18 | 19 | System.out.println("Deleting collections"); 20 | 21 | DeleteCollectionRequest request = new DeleteCollectionRequest() 22 | .withCollectionId(collectionId); 23 | DeleteCollectionResult deleteCollectionResult = rekognitionClient.deleteCollection(request); 24 | 25 | System.out.println(collectionId + ": " + deleteCollectionResult.getStatusCode() 26 | .toString()); 27 | 28 | } 29 | 30 | } 31 | 32 | 33 | 34 | 35 | -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-search-faces-matching-id.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
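// Demonstrates the SearchFaces operation: searches a collection for faces that match a given face ID and prints each match with its similarity score.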
3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | public class SearchFacesMatchingId 9 | { 10 | public static void Example() 11 | { 12 | String collectionId = "MyCollection"; 13 | String faceId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"; 14 | 15 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 16 | 17 | // Search collection for faces matching the face id. 18 | 19 | SearchFacesRequest searchFacesRequest = new SearchFacesRequest() 20 | { 21 | CollectionId = collectionId, 22 | FaceId = faceId, 23 | FaceMatchThreshold = 70F, 24 | MaxFaces = 2 25 | }; 26 | 27 | SearchFacesResponse searchFacesResponse = rekognitionClient.SearchFaces(searchFacesRequest); 28 | 29 | Console.WriteLine("Face matching faceId " + faceId); 30 | 31 | Console.WriteLine("Matche(s): "); 32 | foreach (FaceMatch face in searchFacesResponse.FaceMatches) 33 | Console.WriteLine("FaceId: " + face.Face.FaceId + ", Similarity: " + face.Similarity); 34 | } 35 | } 36 | -------------------------------------------------------------------------------- /code_examples/java_examples/image/java-celebrity-info.java: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | package aws.example.rekognition.image; 5 | import com.amazonaws.services.rekognition.AmazonRekognition; 6 | import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder; 7 | import com.amazonaws.services.rekognition.model.GetCelebrityInfoRequest; 8 | import com.amazonaws.services.rekognition.model.GetCelebrityInfoResult; 9 | 10 | 11 | public class CelebrityInfo { 12 | 13 | public static void main(String[] args) { 14 | String id = "nnnnnnnn"; 15 | 16 | AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient(); 17 | 18 | GetCelebrityInfoRequest request = new GetCelebrityInfoRequest() 19 | .withId(id); 20 | 21 | System.out.println("Getting information for celebrity: " + id); 22 | 23 | GetCelebrityInfoResult result=rekognitionClient.getCelebrityInfo(request); 24 | 25 | //Display celebrity information 26 | System.out.println("celebrity name: " + result.getName()); 27 | System.out.println("Further information (if available):"); 28 | for (String url: result.getUrls()){ 29 | System.out.println(url); 30 | } 31 | } 32 | } 33 | -------------------------------------------------------------------------------- /code_examples/java_examples/streaming_video/java-stream-processor-starter.java: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | // Starter class. Use to create a StreamManager class 5 | // and call stream processor operations. 
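// StreamManager is a helper class defined elsewhere in this guide; it wraps the stream processor operations (create, start, stop, list, describe, delete) that are called below.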
6 | package com.amazonaws.samples; 7 | import com.amazonaws.samples.*; 8 | 9 | public class Starter { 10 | 11 | public static void main(String[] args) { 12 | 13 | 14 | String streamProcessorName="Stream Processor Name"; 15 | String kinesisVideoStreamArn="Kinesis Video Stream Arn"; 16 | String kinesisDataStreamArn="Kinesis Data Stream Arn"; 17 | String roleArn="Role Arn"; 18 | String collectionId="Collection ID"; 19 | Float matchThreshold=50F; 20 | 21 | try { 22 | StreamManager sm= new StreamManager(streamProcessorName, 23 | kinesisVideoStreamArn, 24 | kinesisDataStreamArn, 25 | roleArn, 26 | collectionId, 27 | matchThreshold); 28 | //sm.createStreamProcessor(); 29 | //sm.startStreamProcessor(); 30 | //sm.deleteStreamProcessor(); 31 | //sm.deleteStreamProcessor(); 32 | //sm.stopStreamProcessor(); 33 | //sm.listStreamProcessors(); 34 | //sm.describeStreamProcessor(); 35 | } 36 | catch(Exception e){ 37 | System.out.println(e.getMessage()); 38 | } 39 | } 40 | } 41 | 42 | -------------------------------------------------------------------------------- /doc_source/getting-started.md: -------------------------------------------------------------------------------- 1 | # Getting started with Amazon Rekognition 2 | 3 | This section provides topics to get you started using Amazon Rekognition\. If you're new to Amazon Rekognition, we recommend that you first review the concepts and terminology presented in [How Amazon Rekognition works](how-it-works.md)\. 4 | 5 | Before you can use Rekognition, you'll need to create an AWS account and obtain an AWS account ID\. You will also want to create an IAM user, which enables the Amazon Rekognition system to determine if you have the permissions needed to access its resources\. 6 | 7 | After creating your accounts, you'll want to install and configure the AWS CLI and AWS SDKs\. The AWS CLI lets you interact with Amazon Rekognition and other services through the command line, while the AWS SDKs let you use programming languages like Java and Python to interact with Amazon Rekognition\. 8 | 9 | Once you have set up the AWS CLI and AWS SDKs, you can look at some examples of how to use both of them\. You can also view some examples of how to interact with Amazon Rekognition using the console\. 10 | 11 | **Topics** 12 | + [Step 1: Set up an AWS account and create an IAM user](setting-up.md) 13 | + [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md) 14 | + [Step 3: Getting started using the AWS CLI and AWS SDK API](get-started-exercise.md) 15 | + [Step 4: Getting started using the Amazon Rekognition console](getting-started-console.md) -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-list-faces.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
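// Demonstrates the ListFaces operation: lists the faces stored in a collection, using NextToken to page through the results.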
3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | public class ListFaces 9 | { 10 | public static void Example() 11 | { 12 | String collectionId = "MyCollection"; 13 | 14 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 15 | 16 | ListFacesResponse listFacesResponse = null; 17 | Console.WriteLine("Faces in collection " + collectionId); 18 | 19 | String paginationToken = null; 20 | do 21 | { 22 | if (listFacesResponse != null) 23 | paginationToken = listFacesResponse.NextToken; 24 | 25 | ListFacesRequest listFacesRequest = new ListFacesRequest() 26 | { 27 | CollectionId = collectionId, 28 | MaxResults = 1, 29 | NextToken = paginationToken 30 | }; 31 | 32 | listFacesResponse = rekognitionClient.ListFaces(listFacesRequest); 33 | foreach(Face face in listFacesResponse.Faces) 34 | Console.WriteLine(face.FaceId); 35 | } while (listFacesResponse != null && !String.IsNullOrEmpty(listFacesResponse.NextToken)); 36 | } 37 | } 38 | -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-list-collections.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | public class ListCollections 9 | { 10 | public static void Example() 11 | { 12 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 13 | 14 | Console.WriteLine("Listing collections"); 15 | int limit = 10; 16 | 17 | ListCollectionsResponse listCollectionsResponse = null; 18 | String paginationToken = null; 19 | do 20 | { 21 | if (listCollectionsResponse != null) 22 | paginationToken = listCollectionsResponse.NextToken; 23 | 24 | ListCollectionsRequest listCollectionsRequest = new ListCollectionsRequest() 25 | { 26 | MaxResults = limit, 27 | NextToken = paginationToken 28 | }; 29 | 30 | listCollectionsResponse = rekognitionClient.ListCollections(listCollectionsRequest); 31 | 32 | foreach (String resultId in listCollectionsResponse.CollectionIds) 33 | Console.WriteLine(resultId); 34 | } while (listCollectionsResponse != null && listCollectionsResponse.NextToken != null); 35 | } 36 | } 37 | -------------------------------------------------------------------------------- /code_examples/python_examples/image/python-add-faces-to-collection.py: -------------------------------------------------------------------------------- 1 | #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
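# Demonstrates the IndexFaces operation: adds faces detected in an S3 image to a collection and reports any faces that could not be indexed.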
3 | 4 | import boto3 5 | 6 | if __name__ == "__main__": 7 | 8 | bucket='bucket' 9 | collectionId='MyCollection' 10 | photo='photo' 11 | 12 | client=boto3.client('rekognition') 13 | 14 | response=client.index_faces(CollectionId=collectionId, 15 | Image={'S3Object':{'Bucket':bucket,'Name':photo}}, 16 | ExternalImageId=photo, 17 | MaxFaces=1, 18 | QualityFilter="AUTO", 19 | DetectionAttributes=['ALL']) 20 | 21 | print ('Results for ' + photo) 22 | print('Faces indexed:') 23 | for faceRecord in response['FaceRecords']: 24 | print(' Face ID: ' + faceRecord['Face']['FaceId']) 25 | print(' Location: {}'.format(faceRecord['Face']['BoundingBox'])) 26 | 27 | print('Faces not indexed:') 28 | for unindexedFace in response['UnindexedFaces']: 29 | print(' Location: {}'.format(unindexedFace['FaceDetail']['BoundingBox'])) 30 | print(' Reasons:') 31 | for reason in unindexedFace['Reasons']: 32 | print(' ' + reason) 33 | 34 | -------------------------------------------------------------------------------- /doc_source/streaming-video-kinesis-output-reference-detectedface.md: -------------------------------------------------------------------------------- 1 | # DetectedFace 2 | 3 | Information about a face that's detected in a streaming video frame\. Matching faces in the input collection are available in [MatchedFace](streaming-video-kinesis-output-reference-facematch.md) object field\. 4 | 5 | **BoundingBox** 6 | 7 | The bounding box coordinates for a face that's detected within an analyzed video frame\. The BoundingBox object has the same properties as the BoundingBox object that's used for image analysis\. 8 | 9 | Type: [BoundingBox](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_BoundingBox.html) object 10 | 11 | **Confidence** 12 | 13 | The confidence level \(1\-100\) that Amazon Rekognition Video has that the detected face is actually a face\. 1 is the lowest confidence, 100 is the highest\. 14 | 15 | Type: Number 16 | 17 | **Landmarks** 18 | 19 | An array of facial landmarks\. 20 | 21 | Type: [Landmark](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Landmark.html) object array 22 | 23 | **Pose** 24 | 25 | Indicates the pose of the face as determined by its pitch, roll, and yaw\. 26 | 27 | Type: [Pose](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Pose.html) object 28 | 29 | **Quality** 30 | 31 | Identifies face image brightness and sharpness\. 32 | 33 | Type: [ImageQuality](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_ImageQuality.html) object -------------------------------------------------------------------------------- /doc_source/security_iam_resource-based-policy-examples.md: -------------------------------------------------------------------------------- 1 | # Amazon Rekognition Resource\-Based Policy Examples 2 | 3 | Amazon Rekognition Custom Labels uses resource\-based polices, known as *project policies*, to manage copy permissions for a model version\. 4 | 5 | A project policy gives or denies permission to copy a model version from a source project to a destination project\. You need a project policy if the destination project is in a different AWS account or if you want to restrict access within an AWS account, For example, you might want to deny copy permissions to a specific IAM role\. For more information, see [Copying a model](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/md-copy-model-overview.html)\. 
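A project policy is a JSON document that you attach to the *source* project\. With the AWS SDK for Python \(Boto3\), attaching a policy might look like the following sketch; the ARNs and policy name are placeholders based on the example below, and the `put_project_policy` call is an assumption about your SDK version rather than a verbatim example from this guide\.

```
import json
import boto3

rekognition = boto3.client('rekognition')

# Policy document allowing a principal to copy one model version (see the example below).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/Admin"},
        "Action": "rekognition:CopyProjectVersion",
        "Resource": "arn:aws:rekognition:us-east-1:123456789012:project/my_project/version/test_1/1627045542080"
    }]
}

# Attach the policy to the source project (the project ARN here is a placeholder).
rekognition.put_project_policy(
    ProjectArn="arn:aws:rekognition:us-east-1:123456789012:project/my_project/1627045542080",
    PolicyName="AllowCopyToDestinationAccount",
    PolicyDocument=json.dumps(policy))
```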
6 | 7 | ## Giving permission to copy a model version 8 | 9 | The following example allows the principal `arn:aws:iam::123456789012:role/Admin` to copy the model version `arn:aws:rekognition:us-east-1:123456789012:project/my_project/version/test_1/1627045542080`\. 10 | 11 | ``` 12 | { 13 | "Version":"2012-10-17", 14 | "Statement":[ 15 | { 16 | "Effect":"Allow", 17 | "Principal":{ 18 | "AWS":"arn:aws:iam::123456789012:role/Admin" 19 | }, 20 | "Action":"rekognition:CopyProjectVersion", 21 | "Resource":"arn:aws:rekognition:us-east-1:123456789012:project/my_project/version/test_1/1627045542080" 22 | } 23 | ] 24 | } 25 | ``` -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-detect-labels.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | public class DetectLabels 9 | { 10 | public static void Example() 11 | { 12 | String photo = "input.jpg"; 13 | String bucket = "bucket"; 14 | 15 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 16 | 17 | DetectLabelsRequest detectlabelsRequest = new DetectLabelsRequest() 18 | { 19 | Image = new Image() 20 | { 21 | S3Object = new S3Object() 22 | { 23 | Name = photo, 24 | Bucket = bucket 25 | }, 26 | }, 27 | MaxLabels = 10, 28 | MinConfidence = 75F 29 | }; 30 | 31 | try 32 | { 33 | DetectLabelsResponse detectLabelsResponse = rekognitionClient.DetectLabels(detectlabelsRequest); 34 | Console.WriteLine("Detected labels for " + photo); 35 | foreach (Label label in detectLabelsResponse.Labels) 36 | Console.WriteLine("{0}: {1}", label.Name, label.Confidence); 37 | } 38 | catch (Exception e) 39 | { 40 | Console.WriteLine(e.Message); 41 | } 42 | } 43 | } -------------------------------------------------------------------------------- /code_examples/java_examples/image/java-create-collection.java: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
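// Creates a face collection named "MyCollection" with CreateCollection and prints the
// collection ARN and HTTP status code returned by the service.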
3 | 4 | package aws.example.rekognition.image; 5 | 6 | import com.amazonaws.services.rekognition.AmazonRekognition; 7 | import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder; 8 | import com.amazonaws.services.rekognition.model.CreateCollectionRequest; 9 | import com.amazonaws.services.rekognition.model.CreateCollectionResult; 10 | 11 | 12 | public class CreateCollection { 13 | 14 | public static void main(String[] args) throws Exception { 15 | 16 | 17 | AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient(); 18 | 19 | 20 | String collectionId = "MyCollection"; 21 | System.out.println("Creating collection: " + 22 | collectionId ); 23 | 24 | CreateCollectionRequest request = new CreateCollectionRequest() 25 | .withCollectionId(collectionId); 26 | 27 | CreateCollectionResult createCollectionResult = rekognitionClient.createCollection(request); 28 | System.out.println("CollectionArn : " + 29 | createCollectionResult.getCollectionArn()); 30 | System.out.println("Status code : " + 31 | createCollectionResult.getStatusCode().toString()); 32 | 33 | } 34 | 35 | } 36 | 37 | 38 | -------------------------------------------------------------------------------- /doc_source/images.md: -------------------------------------------------------------------------------- 1 | # Working with images 2 | 3 | This section covers the types of analysis that Amazon Rekognition Image can perform on images\. 4 | + [Object and scene detection](labels.md) 5 | + [Face detection and comparison](faces.md) 6 | + [Searching faces in a collection](collections.md) 7 | + [Celebrity recognition](celebrities.md) 8 | + [Image moderation](moderation.md) 9 | + [Text in image detection](text-detection.md) 10 | 11 | These are performed by non\-storage API operations where Amazon Rekognition Image doesn't persist any information discovered by the operation\. No input image bytes are persisted by non\-storage API operations\. For more information, see [Non\-storage and storage API operations](how-it-works-storage-non-storage.md)\. 12 | 13 | Amazon Rekognition Image can also store facial metadata in collections for later retrieval\. For more information, see [Searching faces in a collection](collections.md)\. 14 | 15 | In this section, you use the Amazon Rekognition Image API operations to analyze images stored in an Amazon S3 bucket and image bytes loaded from the local file system\. This section also covers getting image orientation information from a \.jpg image\. 16 | 17 | **Topics** 18 | + [Image specifications](images-information.md) 19 | + [Analyzing images stored in an Amazon S3 bucket](images-s3.md) 20 | + [Analyzing an image loaded from a local file system](images-bytes.md) 21 | + [Displaying bounding boxes](images-displaying-bounding-boxes.md) 22 | + [Getting image orientation and bounding box coordinates](images-orientation.md) -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-add-faces.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
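// Adds (indexes) the faces detected in an S3 image into the collection "MyCollection"
// with IndexFaces, then prints the face ID of each face record that was created.
// Replace the bucket and photo values before running.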
3 |
4 | using System;
5 | using System.Collections.Generic;
6 | using Amazon.Rekognition;
7 | using Amazon.Rekognition.Model;
8 |
9 | public class AddFaces
10 | {
11 | public static void Example()
12 | {
13 | String collectionId = "MyCollection";
14 | String bucket = "bucket";
15 | String photo = "input.jpg";
16 |
17 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
18 |
19 | Image image = new Image()
20 | {
21 | S3Object = new S3Object()
22 | {
23 | Bucket = bucket,
24 | Name = photo
25 | }
26 | };
27 |
28 | IndexFacesRequest indexFacesRequest = new IndexFacesRequest()
29 | {
30 | Image = image,
31 | CollectionId = collectionId,
32 | ExternalImageId = photo,
33 | DetectionAttributes = new List<String>(){ "ALL" }
34 | };
35 |
36 | IndexFacesResponse indexFacesResponse = rekognitionClient.IndexFaces(indexFacesRequest);
37 |
38 | Console.WriteLine(photo + " added");
39 | foreach (FaceRecord faceRecord in indexFacesResponse.FaceRecords)
40 | Console.WriteLine("Face detected: Faceid is " +
41 | faceRecord.Face.FaceId);
42 | }
43 | }
44 |
--------------------------------------------------------------------------------
/code_examples/php_examples/php-detect-labels-local-file.php:
--------------------------------------------------------------------------------
1 | <?php
2 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
4 |
5 | require 'vendor/autoload.php';
6 |
7 | use Aws\Rekognition\RekognitionClient;
8 |
9 | // Create a Rekognition client
10 | $options = [
11 | 'region' => 'us-west-2',
12 | 'version' => 'latest'
13 | ];
14 |
15 | $rekognition = new RekognitionClient($options);
16 |
17 | // Get local image
18 | $photo = 'input.jpg';
19 | $fp_image = fopen($photo, 'r');
20 | $image = fread($fp_image, filesize($photo));
21 | fclose($fp_image);
22 |
23 |
24 | // Call DetectFaces
25 | $result = $rekognition->DetectFaces(array(
26 | 'Image' => array(
27 | 'Bytes' => $image,
28 | ),
29 | 'Attributes' => array('ALL')
30 | )
31 | );
32 |
33 | // Display info for each detected person
34 | print 'People: Image position and estimated age' . PHP_EOL;
35 | for ($n = 0; $n < sizeof($result['FaceDetails']); $n++) {
36 | print 'Face ' . ($n + 1) . PHP_EOL;
37 | print 'Position: ' . $result['FaceDetails'][$n]['BoundingBox']['Left'] . ' '
38 | . $result['FaceDetails'][$n]['BoundingBox']['Top'] . PHP_EOL;
39 | print 'Age (low): ' . $result['FaceDetails'][$n]['AgeRange']['Low'] . PHP_EOL;
40 | print 'Age (high): ' . $result['FaceDetails'][$n]['AgeRange']['High'] . PHP_EOL . PHP_EOL;
41 | }
--------------------------------------------------------------------------------
/code_examples/javascript_examples/js-estimate-age-snippet.html:
--------------------------------------------------------------------------------
1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
3 |
4 | function ProcessImage() {
5 | AnonLog();
6 | var control = document.getElementById("fileToUpload");
7 | var file = control.files[0];
8 |
9 | // Load base64 encoded image for display
10 | var reader = new FileReader();
11 | reader.onload = (function (theFile) {
12 | return function (e) {
13 | //Call Rekognition
14 | AWS.region = "RegionToUse";
15 | var rekognition = new AWS.Rekognition();
16 | var params = {
17 | Image: {
18 | Bytes: e.target.result
19 | },
20 | Attributes: [
21 | 'ALL',
22 | ]
23 | };
24 | rekognition.detectFaces(params, function (err, data) {
25 | if (err) console.log(err, err.stack); // an error occurred
26 | else {
27 | var table = "<table><tr><th>Low</th><th>High</th></tr>";
28 | // show each face and build out estimated age table
29 | for (var i = 0; i < data.FaceDetails.length; i++) {
30 | table += '<tr><td>' + data.FaceDetails[i].AgeRange.Low +
31 | '</td><td>' + data.FaceDetails[i].AgeRange.High + '</td></tr>';
32 | }
33 | table += "</table>";
34 | document.getElementById("opResult").innerHTML = table;
35 | }
36 | });
37 |
38 | };
39 | })(file);
40 | reader.readAsArrayBuffer(file);
41 | }
--------------------------------------------------------------------------------
/doc_source/recommendations-camera-streaming-video.md:
--------------------------------------------------------------------------------
1 | # Recommendations for camera setup \(streaming video\)
2 |
3 |
4 |
5 | The following recommendation is in addition to [Recommendations for camera setup \(stored and streaming video\)](recommendations-camera-stored-streaming-video.md)\.
6 |
7 | An additional constraint with streaming applications is internet bandwidth\. For live video, Amazon Rekognition only accepts Amazon Kinesis Video Streams as an input\. You should understand the dependency between the encoder bitrate and the available network bandwidth\. Available bandwidth should, at a minimum, support the same bitrate that the camera is using to encode the live stream\. This ensures that whatever the camera captures is relayed through Amazon Kinesis Video Streams\. If the available bandwidth is less than the encoder bitrate, Amazon Kinesis Video Streams drops bits based on the network bandwidth\. This results in low video quality\.
8 |
9 | A typical streaming setup involves connecting multiple cameras to a network hub that relays the streams\. In this case, the bandwidth should accommodate the cumulative sum of the streams coming from all cameras connected to the hub\. For example, if the hub is connected to five cameras encoding at 1\.5 Mbps, the available network bandwidth should be at least 7\.5 Mbps\. To ensure that there are no dropped packets, you should consider keeping the network bandwidth higher than 7\.5 Mbps to accommodate for jitters due to dropped connections between a camera and the hub\. The actual value depends on the reliability of the internal network\.
--------------------------------------------------------------------------------
/code_examples/dotnet_examples/image/net-search-faces-matching-image.cs:
--------------------------------------------------------------------------------
1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
3 |
4 | using System;
5 | using Amazon.Rekognition;
6 | using Amazon.Rekognition.Model;
7 |
8 | public class SearchFacesMatchingImage
9 | {
10 | public static void Example()
11 | {
12 | String collectionId = "MyCollection";
13 | String bucket = "bucket";
14 | String photo = "input.jpg";
15 |
16 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
17 |
18 | // Get an image object from S3 bucket.
19 | Image image = new Image() 20 | { 21 | S3Object = new S3Object() 22 | { 23 | Bucket = bucket, 24 | Name = photo 25 | } 26 | }; 27 | 28 | SearchFacesByImageRequest searchFacesByImageRequest = new SearchFacesByImageRequest() 29 | { 30 | CollectionId = collectionId, 31 | Image = image, 32 | FaceMatchThreshold = 70F, 33 | MaxFaces = 2 34 | }; 35 | 36 | SearchFacesByImageResponse searchFacesByImageResponse = rekognitionClient.SearchFacesByImage(searchFacesByImageRequest); 37 | 38 | Console.WriteLine("Faces matching largest face in image from " + photo); 39 | foreach (FaceMatch face in searchFacesByImageResponse.FaceMatches) 40 | Console.WriteLine("FaceId: " + face.Face.FaceId + ", Similarity: " + face.Similarity); 41 | } 42 | } 43 | -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-detect-moderation-labels.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | public class DetectModerationLabels 9 | { 10 | public static void Example() 11 | { 12 | String photo = "input.jpg"; 13 | String bucket = "bucket"; 14 | 15 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 16 | 17 | DetectModerationLabelsRequest detectModerationLabelsRequest = new DetectModerationLabelsRequest() 18 | { 19 | Image = new Image() 20 | { 21 | S3Object = new S3Object() 22 | { 23 | Name = photo, 24 | Bucket = bucket 25 | }, 26 | }, 27 | MinConfidence = 60F 28 | }; 29 | 30 | try 31 | { 32 | DetectModerationLabelsResponse detectModerationLabelsResponse = rekognitionClient.DetectModerationLabels(detectModerationLabelsRequest); 33 | Console.WriteLine("Detected labels for " + photo); 34 | foreach (ModerationLabel label in detectModerationLabelsResponse.ModerationLabels) 35 | Console.WriteLine("Label: {0}\n Confidence: {1}\n Parent: {2}", 36 | label.Name, label.Confidence, label.ParentName); 37 | } 38 | catch (Exception e) 39 | { 40 | Console.WriteLine(e.Message); 41 | } 42 | } 43 | } -------------------------------------------------------------------------------- /doc_source/example_cross_DetectFaces_section.md: -------------------------------------------------------------------------------- 1 | # Detect faces in an image using an AWS SDK 2 | 3 | The following code example shows how to: 4 | + Save an image in an Amazon Simple Storage Service Amazon S3\) bucket\. 5 | + Use Amazon Rekognition \(Amazon Rekognition\) to detect facial details, such as age range, gender, and emotion \(smiling, etc\.\)\. 6 | + Display those details\. 7 | 8 | **Note** 9 | The source code for these examples is in the [AWS Code Examples GitHub repository](https://github.com/awsdocs/aws-doc-sdk-examples)\. Have feedback on a code example? [Create an Issue](https://github.com/awsdocs/aws-doc-sdk-examples/issues/new/choose) in the code examples repo\. 10 | 11 | ------ 12 | #### [ Rust ] 13 | 14 | **SDK for Rust** 15 | This documentation is for an SDK in preview release\. The SDK is subject to change and should not be used in production\. 
16 | Save the image in an Amazon Simple Storage Service bucket with an **uploads** prefix, use Amazon Rekognition to detect facial details, such as age range, gender, and emotion \(smiling, etc\.\), and display those details\. 17 | For complete source code and instructions on how to set up and run, see the full example on [GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/rust_dev_preview/cross_service/detect_faces/src/main.rs)\. 18 | 19 | **Services used in this example** 20 | + Amazon Rekognition 21 | + Amazon S3 22 | 23 | ------ 24 | 25 | For a complete list of AWS SDK developer guides and code examples, see [Using Rekognition with an AWS SDK](sdk-general-information-section.md)\. This topic also includes information about getting started and details about previous SDK versions\. -------------------------------------------------------------------------------- /code_examples/java_examples/image/java-delete-faces-from-collection.java: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | package aws.example.rekognition.image; 5 | import com.amazonaws.services.rekognition.AmazonRekognition; 6 | import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder; 7 | import com.amazonaws.services.rekognition.model.DeleteFacesRequest; 8 | import com.amazonaws.services.rekognition.model.DeleteFacesResult; 9 | 10 | import java.util.List; 11 | 12 | 13 | public class DeleteFacesFromCollection { 14 | public static final String collectionId = "MyCollection"; 15 | public static final String faces[] = {"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}; 16 | 17 | public static void main(String[] args) throws Exception { 18 | 19 | AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient(); 20 | 21 | 22 | DeleteFacesRequest deleteFacesRequest = new DeleteFacesRequest() 23 | .withCollectionId(collectionId) 24 | .withFaceIds(faces); 25 | 26 | DeleteFacesResult deleteFacesResult=rekognitionClient.deleteFaces(deleteFacesRequest); 27 | 28 | 29 | List < String > faceRecords = deleteFacesResult.getDeletedFaces(); 30 | System.out.println(Integer.toString(faceRecords.size()) + " face(s) deleted:"); 31 | for (String face: faceRecords) { 32 | System.out.println("FaceID: " + face); 33 | } 34 | } 35 | } 36 | 37 | 38 | 39 | 40 | 41 | 42 | -------------------------------------------------------------------------------- /doc_source/celebrity-recognition-vs-face-search.md: -------------------------------------------------------------------------------- 1 | # Celebrity recognition compared to face search 2 | 3 | Amazon Rekognition offers both celebrity recognition and face recognition functionality\. These functionalities have some key differences in their use cases and best practices\. 4 | 5 | Celebrity recognition comes pre\-trained with the ability to recognize hundreds of thousands of popular people in fields such as sports, media, politics, and business\. This functionality is designed to help you search large volumes of images or videos in order to identify a small set that is likely to contain a particular celebrity\. It's not intended to be used to match faces between different people that are not celebrities\. 
In situations where the accuracy of the celebrity match is important, we recommend also using human operators to look through this smaller amount of marked content to help ensure a high level of accuracy and the proper application of human judgment\. Celebrity recognition should not be used in a manner that could result in a negative impact on civil liberties\. 6 | 7 | In contrast, face recognition is a more general functionality that allows you to create your own face collections with your own face vectors to verify identities or search for any person, not just celebrities\. Face recognition can be used for applications such as authenticating building access, public safety, and social media\. In all these cases, it’s recommended that you use best practices, appropriate confidence thresholds \(including 99% for public safety use cases\), and human review in situations where the accuracy of the match is important\. 8 | 9 | For more information, see [Searching faces in a collection](collections.md)\. -------------------------------------------------------------------------------- /code_examples/dotnet_examples/image/net-detect-text.cs: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | using System; 5 | using Amazon.Rekognition; 6 | using Amazon.Rekognition.Model; 7 | 8 | public class DetectText 9 | { 10 | public static void Example() 11 | { 12 | String photo = "input.jpg"; 13 | String bucket = "bucket"; 14 | 15 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient(); 16 | 17 | DetectTextRequest detectTextRequest = new DetectTextRequest() 18 | { 19 | Image = new Image() 20 | { 21 | S3Object = new S3Object() 22 | { 23 | Name = photo, 24 | Bucket = bucket 25 | } 26 | } 27 | }; 28 | 29 | try 30 | { 31 | DetectTextResponse detectTextResponse = rekognitionClient.DetectText(detectTextRequest); 32 | Console.WriteLine("Detected lines and words for " + photo); 33 | foreach (TextDetection text in detectTextResponse.TextDetections) 34 | { 35 | Console.WriteLine("Detected: " + text.DetectedText); 36 | Console.WriteLine("Confidence: " + text.Confidence); 37 | Console.WriteLine("Id : " + text.Id); 38 | Console.WriteLine("Parent Id: " + text.ParentId); 39 | Console.WriteLine("Type: " + text.Type); 40 | } 41 | } 42 | catch (Exception e) 43 | { 44 | Console.WriteLine(e.Message); 45 | } 46 | } 47 | } -------------------------------------------------------------------------------- /code_examples/java_examples/image/java-describe-collection.java: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
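// Describes the collection named by collectionId and prints its ARN, face count,
// face model version, and creation timestamp.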
3 | 4 | package com.amazonaws.samples; 5 | 6 | import com.amazonaws.services.rekognition.AmazonRekognition; 7 | import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder; 8 | import com.amazonaws.services.rekognition.model.DescribeCollectionRequest; 9 | import com.amazonaws.services.rekognition.model.DescribeCollectionResult; 10 | 11 | 12 | public class DescribeCollection { 13 | 14 | public static void main(String[] args) throws Exception { 15 | 16 | String collectionId = "CollectionID"; 17 | 18 | AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient(); 19 | 20 | 21 | System.out.println("Describing collection: " + 22 | collectionId ); 23 | 24 | 25 | DescribeCollectionRequest request = new DescribeCollectionRequest() 26 | .withCollectionId(collectionId); 27 | 28 | DescribeCollectionResult describeCollectionResult = rekognitionClient.describeCollection(request); 29 | System.out.println("Collection Arn : " + 30 | describeCollectionResult.getCollectionARN()); 31 | System.out.println("Face count : " + 32 | describeCollectionResult.getFaceCount().toString()); 33 | System.out.println("Face model version : " + 34 | describeCollectionResult.getFaceModelVersion()); 35 | System.out.println("Created : " + 36 | describeCollectionResult.getCreationTimestamp().toString()); 37 | 38 | } 39 | 40 | } 41 | 42 | 43 | -------------------------------------------------------------------------------- /doc_source/celebrities.md: -------------------------------------------------------------------------------- 1 | # Recognizing celebrities 2 | 3 | Amazon Rekognition makes it easy for customers to automatically recognize tens of thousands of well\-known personalities in images and videos using machine learning\. The metadata provided by the celebrity recognition API significantly reduces the repetitive manual effort required to tag content and make it readily searchable\. 4 | 5 | The rapid proliferation of image and video content means that media companies often struggle to organize, search, and utilize their media catalogs at scale\. News channels and sports broadcasters often need to find images and videos quickly, in order to respond to current events and create relevant programming\. Insufficient metadata makes these tasks difficult, but with Amazon Rekognition you can automatically tag large volumes of new or archival content to make it easily searchable for a comprehensive set of international, widely known celebrities like actors, sportspeople, and online content creators\. 6 | 7 | Amazon Rekognition celebrity recognition is designed to be used exclusively in cases where you expect there may be a known celebrity in an image or a video\. For information about recognizing faces that are not celebrities, see [Searching faces in a collection](collections.md)\. 8 | 9 | **Note** 10 | If you are a celebrity and don’t want to be included in this feature, contact [AWS Support](https://aws.amazon.com/contact-us/) or email rekognition\-celebrity\-opt\-out@amazon\.com\. 
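The topics below cover the console and API procedures in detail\. As a quick illustration only, a minimal AWS SDK for Python \(Boto3\) sketch for recognizing celebrities in an image stored in Amazon S3 might look like the following; the bucket and object names are placeholders\.

```
import boto3

client = boto3.client('rekognition')

# Recognize celebrities in an image stored in Amazon S3 (placeholder bucket/object names).
response = client.recognize_celebrities(
    Image={'S3Object': {'Bucket': 'my-bucket', 'Name': 'photo.jpg'}})

for celebrity in response['CelebrityFaces']:
    print(celebrity['Name'], celebrity['MatchConfidence'])
print('Faces not recognized as celebrities:', len(response['UnrecognizedFaces']))
```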
11 | 12 | **Topics** 13 | + [Celebrity recognition compared to face search](celebrity-recognition-vs-face-search.md) 14 | + [Recognizing celebrities in an image](celebrities-procedure-image.md) 15 | + [Recognizing celebrities in a stored video](celebrities-video-sqs.md) 16 | + [Getting information about a celebrity](get-celebrity-info-procedure.md) -------------------------------------------------------------------------------- /doc_source/get-started-exercise.md: -------------------------------------------------------------------------------- 1 | # Step 3: Getting started using the AWS CLI and AWS SDK API 2 | 3 | After you've set up the AWS CLI and AWS SDKs that you want to use, you can build applications that use Amazon Rekognition\. The following topics show you how to get started with Amazon Rekognition Image and Amazon Rekognition Video\. 4 | + [Working with images](images.md) 5 | + [Working with stored video analysis](video.md) 6 | + [Working with streaming video events](streaming-video.md) 7 | 8 | ## Formatting the AWS CLI examples 9 | 10 | The AWS CLI examples in this guide are formatted for the Linux operating system\. To use the samples with Microsoft Windows, you need to change the JSON formatting of the `--image` parameter, and change the line breaks from backslashes \(\\\) to carets \(^\)\. For more information about JSON formatting, see [Specifying Parameter Values for the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html)\. The following is an example AWS CLI command that's formatted for Microsoft Windows\. 11 | 12 | ``` 13 | aws rekognition detect-labels ^ 14 | --image "{\"S3Object\":{\"Bucket\":\"photo-collection\",\"Name\":\"photo.jpg\"}}" ^ 15 | --region us-west-2 16 | ``` 17 | 18 | You can also provide a shorthand version of the JSON that works on both Microsoft Windows and Linux\. 19 | 20 | ``` 21 | aws rekognition detect-labels --image "S3Object={Bucket=photo-collection,Name=photo.jpg}" --region us-west-2 22 | ``` 23 | 24 | For more information, see [Using Shorthand Syntax with the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/shorthand-syntax.html)\. 25 | 26 | ## Next step 27 | 28 | [Step 4: Getting started using the Amazon Rekognition console](getting-started-console.md) -------------------------------------------------------------------------------- /code_examples/java_examples/image/java-list-collections.java: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 
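// Lists the face collections in the current region, 10 at a time, following the
// pagination token returned by ListCollections until no further results remain.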
3 |
4 | package aws.example.rekognition.image;
5 |
6 | import java.util.List;
7 | import com.amazonaws.services.rekognition.AmazonRekognition;
8 | import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
9 | import com.amazonaws.services.rekognition.model.ListCollectionsRequest;
10 | import com.amazonaws.services.rekognition.model.ListCollectionsResult;
11 |
12 | public class ListCollections {
13 |
14 | public static void main(String[] args) throws Exception {
15 |
16 |
17 | AmazonRekognition amazonRekognition = AmazonRekognitionClientBuilder.defaultClient();
18 |
19 |
20 | System.out.println("Listing collections");
21 | int limit = 10;
22 | ListCollectionsResult listCollectionsResult = null;
23 | String paginationToken = null;
24 | do {
25 | if (listCollectionsResult != null) {
26 | paginationToken = listCollectionsResult.getNextToken();
27 | }
28 | ListCollectionsRequest listCollectionsRequest = new ListCollectionsRequest()
29 | .withMaxResults(limit)
30 | .withNextToken(paginationToken);
31 | listCollectionsResult = amazonRekognition.listCollections(listCollectionsRequest);
32 |
33 | List < String > collectionIds = listCollectionsResult.getCollectionIds();
34 | for (String resultId: collectionIds) {
35 | System.out.println(resultId);
36 | }
37 | } while (listCollectionsResult != null && listCollectionsResult.getNextToken() !=
38 | null);
39 |
40 | }
41 | }
--------------------------------------------------------------------------------
/code_examples/dotnet_examples/image/net-detect-labels-local-file.cs:
--------------------------------------------------------------------------------
1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
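// Detects labels in a local image file by sending the image bytes directly to DetectLabels
// (no S3 upload), returning at most 10 labels with a confidence of at least 77 percent.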
3 |
4 | using System;
5 | using System.IO;
6 | using Amazon.Rekognition;
7 | using Amazon.Rekognition.Model;
8 |
9 | public class DetectLabelsLocalfile
10 | {
11 | public static void Example()
12 | {
13 | String photo = "input.jpg";
14 |
15 | Amazon.Rekognition.Model.Image image = new Amazon.Rekognition.Model.Image();
16 | try
17 | {
18 | using (FileStream fs = new FileStream(photo, FileMode.Open, FileAccess.Read))
19 | {
20 | byte[] data = null;
21 | data = new byte[fs.Length];
22 | fs.Read(data, 0, (int)fs.Length);
23 | image.Bytes = new MemoryStream(data);
24 | }
25 | }
26 | catch (Exception)
27 | {
28 | Console.WriteLine("Failed to load file " + photo);
29 | return;
30 | }
31 |
32 | AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
33 |
34 | DetectLabelsRequest detectlabelsRequest = new DetectLabelsRequest()
35 | {
36 | Image = image,
37 | MaxLabels = 10,
38 | MinConfidence = 77F
39 | };
40 |
41 | try
42 | {
43 | DetectLabelsResponse detectLabelsResponse = rekognitionClient.DetectLabels(detectlabelsRequest);
44 | Console.WriteLine("Detected labels for " + photo);
45 | foreach (Label label in detectLabelsResponse.Labels)
46 | Console.WriteLine("{0}: {1}", label.Name, label.Confidence);
47 | }
48 | catch (Exception e)
49 | {
50 | Console.WriteLine(e.Message);
51 | }
52 | }
53 | }
--------------------------------------------------------------------------------
/doc_source/example_cross_DetectLabels_section.md:
--------------------------------------------------------------------------------
1 | # Save EXIF and other image information using an AWS SDK
2 |
3 | The following code example shows how to:
4 | + Get EXIF information from a JPG, JPEG, or PNG file\.
5 | + Upload the image file to an Amazon Simple Storage Service \(Amazon S3\) bucket\.
6 | + Use Amazon Rekognition to identify the top three attributes \(labels in Amazon Rekognition\) in the file\.
7 | + Add the EXIF and label information to an Amazon DynamoDB \(DynamoDB\) table in the Region\.
8 |
9 | **Note**
10 | The source code for these examples is in the [AWS Code Examples GitHub repository](https://github.com/awsdocs/aws-doc-sdk-examples)\. Have feedback on a code example? [Create an Issue](https://github.com/awsdocs/aws-doc-sdk-examples/issues/new/choose) in the code examples repo\.
11 |
12 | ------
13 | #### [ Rust ]
14 |
15 | **SDK for Rust**
16 | This documentation is for an SDK in preview release\. The SDK is subject to change and should not be used in production\.
17 | Get EXIF information from a JPG, JPEG, or PNG file, upload the image file to an Amazon Simple Storage Service bucket, use Amazon Rekognition to identify the top three attributes \(*labels* in Amazon Rekognition\) in the file, and add the EXIF and label information to an Amazon DynamoDB table in the Region\.
18 | For complete source code and instructions on how to set up and run, see the full example on [GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/rust_dev_preview/cross_service/detect_labels/src/main.rs)\.
19 |
20 | **Services used in this example**
21 | + DynamoDB
22 | + Amazon Rekognition
23 | + Amazon S3
24 |
25 | ------
26 |
27 | For a complete list of AWS SDK developer guides and code examples, see [Using Rekognition with an AWS SDK](sdk-general-information-section.md)\. This topic also includes information about getting started and details about previous SDK versions\.
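The Rust source isn't reproduced here, but the same flow can be sketched with the AWS SDK for Python \(Boto3\)\. In the following sketch the bucket, table, key, and file names are illustrative, the DynamoDB table is assumed to use `filename` as its partition key, and the Pillow library is assumed for reading EXIF tags\.

```
import json
import boto3
from PIL import Image, ExifTags  # Pillow is assumed for EXIF extraction

def process_image(path, bucket, table_name):
    # Read EXIF tags from the local file.
    exif = {ExifTags.TAGS.get(tag, str(tag)): str(value)
            for tag, value in Image.open(path).getexif().items()}

    # Upload the file to Amazon S3 so Rekognition can read it.
    key = 'uploads/' + path.split('/')[-1]
    boto3.client('s3').upload_file(path, bucket, key)

    # Ask Amazon Rekognition for the top three labels.
    labels_response = boto3.client('rekognition').detect_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': key}},
        MaxLabels=3)
    labels = {label['Name']: str(round(label['Confidence'], 2))
              for label in labels_response['Labels']}

    # Store the EXIF data and the labels in a DynamoDB table keyed by the object name.
    boto3.resource('dynamodb').Table(table_name).put_item(
        Item={'filename': key, 'exif': json.dumps(exif), 'labels': labels})

process_image('photo.jpg', 'my-bucket', 'ImageMetadata')  # placeholder names
```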
-------------------------------------------------------------------------------- /code_examples/java_examples/image/java-detect-labels.java: -------------------------------------------------------------------------------- 1 | //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.) 3 | 4 | package com.amazonaws.samples; 5 | import com.amazonaws.services.rekognition.AmazonRekognition; 6 | import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder; 7 | import com.amazonaws.services.rekognition.model.AmazonRekognitionException; 8 | import com.amazonaws.services.rekognition.model.DetectLabelsRequest; 9 | import com.amazonaws.services.rekognition.model.DetectLabelsResult; 10 | import com.amazonaws.services.rekognition.model.Image; 11 | import com.amazonaws.services.rekognition.model.Label; 12 | import com.amazonaws.services.rekognition.model.S3Object; 13 | import java.util.List; 14 | 15 | public class DetectLabels { 16 | 17 | public static void main(String[] args) throws Exception { 18 | 19 | String photo = "input.jpg"; 20 | String bucket = "bucket"; 21 | 22 | 23 | AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient(); 24 | 25 | DetectLabelsRequest request = new DetectLabelsRequest() 26 | .withImage(new Image() 27 | .withS3Object(new S3Object() 28 | .withName(photo).withBucket(bucket))) 29 | .withMaxLabels(10) 30 | .withMinConfidence(75F); 31 | 32 | try { 33 | DetectLabelsResult result = rekognitionClient.detectLabels(request); 34 | List