Google AI today shared that it has created a model for detecting an endangered species of orca in the Salish Sea, a waterway between the United States and Canada that stretches from Washington state to Vancouver Bay. Underwater microphones positioned at a dozen points in the Salish Sea alert officials when a Southern Resident killer whale is detected.
Fewer than 100 of the whales are thought to remain, according to the Center for Whale Research.
The orca detection model is the latest acoustic AI project from Google. It follows previous work to detect the sound of chainsaws in rainforests and stop illegal logging operations, as well as work last year with the National Oceanic and Atmospheric Administration (NOAA) in the U.S. to help protect humpback whales.
The orca model runs on a platform operated by Rainforest Connection, the nonprofit behind the chainsaw detection work. Detection alerts are sent to the smartphones of officials at Canada's Department of Fisheries and Oceans (DFO), who can divert vessels or dispatch the coast guard to clear boat traffic in Vancouver Bay.
To train the model, the DFO provided 1,800 hours of underwater audio recordings with 68,000 labels. Notifications also include an audio recording of the sounds detected by the deep neural network, so human experts can verify the prediction and draw their own conclusions about the whale's current state of health.
Additional work is ongoing to better distinguish Southern Resident orcas from other orca populations and to understand when specific whale sounds are associated with health problems, Google AI engineer Matt Harvey told VentureBeat.
The news was shared today at a Google AI event at company offices in San Francisco. In addition to real-time whale tracking, Google AI shared that it can now detect signs of anemia in people from retinal scans. This is the latest work from Google that uses computer vision to find patterns in eye scans, following efforts to identify diabetic retinopathy and a range of other eye diseases.
Google AI also shared plans to bring transcription to Google Translate for long-form interpretation, akin to the speech-to-text transcription now available in the Pixel 4's Recorder app. No release date has been set, a company spokesperson told VentureBeat.