In the future, these efforts will be able to assist organizations such as the Canadian Coast Guard in their search and rescue work. Before partnering with Microsoft, InDro Robotics had to manually monitor specific environments to recognize emergency situations: each drone required a dedicated human operator, a 1:1 ratio of operators to drones. This heavy reliance on operators kept the organization from leveraging the full potential of its fleet.
Leveraging the Custom Vision Cognitive Service and other Azure services, including IoT Hub, InDro Robotics developers can now equip their drones to:
- Identify objects in large bodies of water, such as life vests and boats, and determine the severity of the findings
- Recognize emergency situations and notify control stations immediately, before assigning the incident to a rescue squad
- Establish communication between boats, rescue squads, and control stations, with infrastructure that supports permanent storage and per-device authentication
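To make the notification step concrete, here is a minimal sketch of the kind of alert message a drone might publish to a control station through an ingestion service such as IoT Hub. The field names, tags, and severity rule are illustrative assumptions, not part of InDro Robotics' actual implementation:

```python
import json
from datetime import datetime, timezone

def build_alert(device_id, tag, probability, lat, lon):
    """Build a JSON alert payload for a detection event.

    The severity rule here is a simple illustrative heuristic:
    high-confidence detections of a person or life vest are
    treated as emergencies; everything else is flagged for review.
    """
    is_emergency = tag in {"person", "life-vest"} and probability >= 0.8
    return json.dumps({
        "deviceId": device_id,          # identifies the drone for per-device authentication
        "detectedTag": tag,             # tag returned by the vision service
        "probability": round(probability, 3),
        "severity": "emergency" if is_emergency else "review",
        "location": {"lat": lat, "lon": lon},
        "timestampUtc": datetime.now(timezone.utc).isoformat(),
    })
```

In a real deployment the resulting string would be sent as a device-to-cloud message, where IoT Hub's per-device credentials authenticate the sender before the payload is stored and routed.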
The new Custom Vision service, made available in preview at Build 2017, makes it possible to teach machines to identify objects. Before this offering, deep learning approaches required hundreds of images to train a model to recognize even a single type of object. This created a further challenge: a drone cannot run deep learning algorithms on demand because of its limited onboard compute. With Microsoft Azure and this new offering, objects can now be recognized in the cloud with very little development effort.
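The cloud service returns a set of tags with confidence scores for each submitted image; the drone-side logic only has to pick the most likely tag and ignore low-confidence guesses. A minimal sketch of that selection step, with an assumed 0.5 threshold and illustrative tag names:

```python
def top_prediction(predictions, threshold=0.5):
    """Pick the most likely tag from (tag, probability) pairs,
    as returned by a cloud image-classification service.

    Returns the best pair, or None if nothing clears the
    confidence threshold (an assumed cutoff, tunable per use case).
    """
    if not predictions:
        return None
    best = max(predictions, key=lambda pair: pair[1])
    return best if best[1] >= threshold else None
```

This keeps the drone's own workload trivial: the heavy model inference happens in the cloud, and the device only post-processes a short list of scores.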
Here are some of the steps taken in completing this project:
Training the new Custom Vision service works best in a closed or static environment, which ensures the best results when training the service for object detection. The service does not require object boundaries to be defined or any specific information about object locations. By simply tagging all images, the framework compares images with different tags to work out what distinguishes the objects. While this means less time is needed to prepare data for the service, it is still very important to provide images of objects in their real environments for better identification.

For example, if you want to find a life vest in the water, the best training environment is the water itself. You should provide images of a life vest in the water as well as images of the water without any objects. Life vest images on a white background will not train the model well, because the background differs too much from the real environment. InDro Robotics elected to launch their drones and capture photos in real time to properly prepare for the project.
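Because the service only needs whole-image tags, preparing an upload batch can be as simple as grouping the captured photos by folder. Here is a minimal sketch of that step; the folder layout and tag names (`life-vest`, `water-only`) are illustrative assumptions, not required by the service:

```python
from collections import defaultdict
from pathlib import Path

def collect_training_images(root):
    """Group captured photos by tag, using one sub-folder per tag,
    e.g. root/life-vest/*.jpg and root/water-only/*.jpg.

    Custom Vision needs only whole-image tags -- no bounding boxes --
    so the sub-folder name doubles as the tag for each image.
    Returns a dict mapping tag name to a list of image paths.
    """
    batches = defaultdict(list)
    for path in Path(root).rglob("*.jpg"):
        batches[path.parent.name].append(path)
    return dict(batches)
```

Each batch could then be uploaded tag by tag through the Custom Vision training API, with the `water-only` set serving as the negative examples the paragraph above recommends.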
For the full guide, go to CANITPRO.