
10 Bits: the Data News Hotlist

This week’s list of data news highlights covers September 8-14, 2018, and includes articles about an AI system that can identify images of child abuse and a new search engine for open data.

1. Teaching Machines to Figure Out New Objects

Researchers at the Massachusetts Institute of Technology have developed a machine vision system called Dense Object Nets (DON) that enables a computer to intuit how an object works just by looking at it. DON builds a model of the objects it sees and infers details about their function, such as recognizing that a coffee cup needs to be picked up in a way that keeps the bottom of the mug pointing down to avoid spilling its contents.

2. Taming Scientific Literature with AI

Scientists are increasingly relying on AI tools that make it easier to analyze scientific literature, validate hypotheses, and develop new insights from existing research. Over 1 million new research papers are published every year, making it difficult for researchers to stay informed about new developments and find relevant information in their field. An AI tool called Iris.ai can analyze large databases of research papers and identify text that could help answer a researcher’s question. Another tool called Semantic Scholar uses natural language processing to extract relevant information from research papers and determine if the information is both relevant and reputable.
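
For a rough sense of how this kind of literature search works, here is a minimal sketch that ranks paper abstracts against a research question using TF-IDF vectors and cosine similarity. It is a generic retrieval approach, not the actual method behind Iris.ai or Semantic Scholar, and the abstracts and query below are made up.

```python
# Minimal sketch: rank paper abstracts by relevance to a research question
# using TF-IDF vectors and cosine similarity (scikit-learn). This is a
# generic retrieval approach, not the method used by Iris.ai or Semantic
# Scholar; the abstracts and query are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep learning methods for protein structure prediction.",
    "A survey of convolutional neural networks for image classification.",
    "Graph-based models of citation networks in scientific literature.",
]
query = "neural networks for predicting protein folding"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)   # one row per abstract
query_vector = vectorizer.transform([query])

# Score every abstract against the query and print from most to least relevant.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {abstracts[idx]}")
```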

3. AI Learns to Predict Aneurysm Risk

Researchers at Stanford University have developed an AI system that analyzes genetic and electronic health record (EHR) data to predict whether someone has a high risk of developing an abdominal aortic aneurysm (AAA). AAA is an often-fatal cardiovascular disease that rarely shows symptoms, making it difficult to detect. By using AI to analyze just genomic data, the researchers were able to predict whether a person has a high risk of AAA with 70 percent accuracy. When including EHR data, such as whether a patient smoked, the system could predict AAA risk with 80 percent accuracy.
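
The pattern described above, adding EHR variables to genomic features and seeing accuracy improve, can be sketched in a few lines. The example below uses synthetic data and an ordinary logistic regression; it is not the Stanford model, just an illustration of why an extra feature such as smoking status can lift accuracy.

```python
# Minimal sketch: train a risk classifier on "genomic" features alone, then
# on genomic + EHR features, and compare accuracy. All data is synthetic and
# the model is a plain logistic regression, not the Stanford system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
genomic = rng.normal(size=(n, 10))           # stand-in for genomic risk scores
smoking = rng.integers(0, 2, size=(n, 1))    # stand-in EHR feature
# Synthetic outcome influenced by both kinds of features.
risk = genomic[:, 0] + 1.5 * smoking[:, 0] + rng.normal(scale=1.0, size=n)
label = (risk > np.median(risk)).astype(int)

for name, features in [("genomic only", genomic),
                       ("genomic + EHR", np.hstack([genomic, smoking]))]:
    X_train, X_test, y_train, y_test = train_test_split(
        features, label, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"{name}: {accuracy_score(y_test, model.predict(X_test)):.2f} accuracy")
```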

4. Facebook is Using AI to Understand Memes

Facebook has developed an AI system called Rosetta that uses computer vision and natural language processing to interpret memes, as well as regular images and video, and potentially flag them as abusive content. Rosetta scans an image to identify and interpret any text overlaid on it, allowing it to spot spam, hate speech, or other prohibited content on Facebook and Instagram. This kind of content often takes the form of memes because they can convey a lot of information with just an image, are easy to share, and have traditionally been difficult for automated moderating tools to interpret.
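
Here is a minimal sketch of a two-stage pipeline like the one described, using off-the-shelf OCR (pytesseract) and a naive keyword list as stand-ins. Facebook's Rosetta uses its own text detection and recognition models, which are not shown here; the policy list and input file are hypothetical.

```python
# Minimal sketch: pull any overlaid text out of an image, then check it
# against content rules. Uses generic OCR as a stand-in for Rosetta's own
# detection and recognition models; the banned-phrase list is a placeholder.
from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed

BANNED_PHRASES = {"example banned phrase", "another banned phrase"}  # placeholder policy list

def flag_image(path: str) -> bool:
    """Return True if text overlaid on the image matches the policy list."""
    text = pytesseract.image_to_string(Image.open(path)).lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

if __name__ == "__main__":
    print(flag_image("meme.png"))  # hypothetical input file
```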

5. AI Can Hear Depression In Your Voice

Researchers at the Massachusetts Institute of Technology have developed an AI system that can analyze voice recordings and predict if a speaker is depressed, and if so, how severely. The researchers trained their system to recognize speech characteristics common in people with depression, such as breathiness or pitch, and identify patterns that likely indicate whether a person is depressed. In a test, the system could identify whether a person was depressed with 83 percent accuracy.
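
As a rough illustration, the sketch below turns voice recordings into acoustic features (MFCCs via librosa) and feeds them to a simple classifier. The feature choice, model, file names, and labels are all assumptions for illustration, not MIT's system.

```python
# Minimal sketch: summarize each recording as acoustic features and train a
# classifier on labeled examples. Features, model, and data are stand-ins,
# not the MIT system; the .wav paths and labels below are hypothetical.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def acoustic_features(path: str) -> np.ndarray:
    """Summarize a recording as the mean of its MFCC coefficients."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled data: paths to recordings and 0/1 depression labels.
recordings = ["speaker_01.wav", "speaker_02.wav", "speaker_03.wav"]
labels = [0, 1, 0]

X = np.stack([acoustic_features(p) for p in recordings])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict(X))
```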

6. This App Can Help You Figure Out What That Animal Is

A team at the California Academy of Sciences has developed a smartphone app called iNaturalist that uses AI to identify animal species in users’ photos. iNaturalist began in 2008 as a crowdsourced initiative that allowed users to upload pictures so that other users could attempt to identify the animals in them. This process created a database of 5.3 million photos of 117,000 species with labels and location tags that served as training data for the iNaturalist app. When iNaturalist cannot identify a species with a high degree of confidence, it will inform users of what genus it believes the animal belongs to and provide a list of possible species it could be.
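
The genus fallback described above is easy to illustrate: if no species gets enough probability, roll the scores up to genus level and report candidates. The species table and probabilities in this sketch are made up, and the real iNaturalist model works from image features rather than a hard-coded table.

```python
# Minimal sketch of confidence-threshold fallback: report a species when the
# model is confident, otherwise report the most likely genus with candidate
# species. The species-to-genus table and probabilities are invented.
from collections import defaultdict

SPECIES_TO_GENUS = {
    "Sciurus carolinensis": "Sciurus",
    "Sciurus niger": "Sciurus",
    "Tamias striatus": "Tamias",
}

def identify(species_probs: dict, threshold: float = 0.8) -> str:
    best_species, best_p = max(species_probs.items(), key=lambda kv: kv[1])
    if best_p >= threshold:
        return f"Species: {best_species} ({best_p:.0%} confident)"
    # Low confidence: aggregate probability mass by genus instead.
    genus_probs = defaultdict(float)
    for species, p in species_probs.items():
        genus_probs[SPECIES_TO_GENUS[species]] += p
    genus = max(genus_probs, key=genus_probs.get)
    candidates = [s for s in species_probs if SPECIES_TO_GENUS[s] == genus]
    return f"Genus: {genus}; possible species: {', '.join(candidates)}"

print(identify({"Sciurus carolinensis": 0.45, "Sciurus niger": 0.40, "Tamias striatus": 0.15}))
```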

7. AI Can Create New Video Games

Computer scientists at Georgia Tech have developed a machine learning system that can generate novel video games. The researchers first classify every sprite that appears in a video of gameplay from a simple video game; sprites are the individual images that make up the game, including background art, animation frames, and level structure. They then use image analysis to identify when and where each sprite occurs in the game. By combining this data with information about the game’s rules, they create what they call a “game graph” that represents everything about a video game. Their machine learning system can manipulate and combine game graphs to generate new video games, such as by combining the game graphs of Mega Man and Super Mario Bros.
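
Here is a minimal sketch of what a game-graph-style representation might look like: a structure that records a game's sprites and rules, plus a naive way to combine two of them. The real Georgia Tech representation is learned from gameplay video and is far richer; the fields and merge strategy below are assumptions for illustration only.

```python
# Minimal sketch of a "game graph"-style data structure and a naive merge.
# The fields and combine() strategy are assumptions, not the researchers'
# actual learned representation.
from dataclasses import dataclass, field

@dataclass
class GameGraph:
    name: str
    sprites: set = field(default_factory=set)  # background art, characters, tiles
    rules: set = field(default_factory=set)    # facts about how sprites behave

    def combine(self, other: "GameGraph") -> "GameGraph":
        """Merge two game graphs into a new (hypothetical) game."""
        return GameGraph(
            name=f"{self.name} x {other.name}",
            sprites=self.sprites | other.sprites,
            rules=self.rules | other.rules,
        )

mega_man = GameGraph("Mega Man", {"mega_man", "platform"}, {"player shoots projectiles"})
mario = GameGraph("Super Mario Bros.", {"mario", "goomba", "platform"}, {"player stomps enemies"})
print(mega_man.combine(mario))
```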

8. AI Learns to Track Images Over Time

Researchers at the Massachusetts Institute of Technology have developed an AI system called a Temporal Relation Network (TRN) that can track objects in video and learn how they change over time. Spatio-temporal modeling is easy for humans but difficult for AI. For example, by examining three video frames—the first a stack of cans, the second a stack of cans with a person’s finger touching the base, and the third a knocked-over stack of cans—humans can easily infer that the finger knocked over the cans. Rather than analyze every image frame, which can be inefficient, TRN identifies key frames to analyze to determine what is happening in a scene, allowing it to run accurately in real time.
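
The core idea, sampling a few frames spread across a clip and reasoning about how they relate, can be sketched as follows. The frame features and relation function below are placeholders, not the actual Temporal Relation Network architecture.

```python
# Minimal sketch: pick a few key frames instead of processing every frame,
# then score how those frames relate over time. The "features" and relation
# function are toy stand-ins for learned components.
import numpy as np

def sample_key_frames(num_frames: int, num_samples: int = 3) -> list:
    """Pick a few frame indices spread evenly across the video."""
    return list(np.linspace(0, num_frames - 1, num_samples, dtype=int))

def relation_score(frame_features: list) -> float:
    """Toy stand-in for a learned relation module: how much do the sampled
    frames change from one to the next?"""
    diffs = [np.abs(b - a).mean() for a, b in zip(frame_features, frame_features[1:])]
    return float(np.mean(diffs))

# Fake video: 120 frames of random "features"; a real system would use CNN features.
rng = np.random.default_rng(0)
video = [rng.normal(size=128) for _ in range(120)]
keys = sample_key_frames(len(video))
print("sampled frames:", keys,
      "relation score:", round(relation_score([video[i] for i in keys]), 3))
```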

9. AI Isn’t Saying It’s Aliens, but…

A team at the University of California, Berkeley’s Search for Extraterrestrial Intelligence (SETI) Research Center has used AI to identify 72 radio signal bursts from distant galaxies. The team, called Breakthrough Listen, first identified 21 radio bursts by manually analyzing 400 terabytes of data in 2017. By using a convolutional neural network on this same dataset, the team was able to identify 72 new radio bursts. Breakthrough Listen’s system can detect radio bursts, which some believe could be signs of extraterrestrial life, with 98 percent accuracy.
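
For illustration, here is a minimal sketch of the general technique: a small convolutional network (in PyTorch) that classifies spectrogram-like arrays as burst or no burst. The architecture and random data are placeholders, not the Breakthrough Listen model or its 400 terabytes of observations.

```python
# Minimal sketch of a CNN classifier for spectrogram-like inputs, standing in
# for the kind of model Breakthrough Listen used; architecture and data are
# placeholders, not the actual system.
import torch
import torch.nn as nn

class BurstClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 2),  # two classes: burst / no burst
        )

    def forward(self, x):
        return self.net(x)

model = BurstClassifier()
spectrograms = torch.randn(4, 1, 64, 64)  # batch of fake 64x64 spectrograms
print(model(spectrograms).shape)           # -> torch.Size([4, 2])
```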

10. Predicting Radiation Treatments with AI

Researchers at DeepMind and University College London Hospital have developed an AI system that can analyze scans of cancer patients’ heads and help guide radiotherapy treatments with near-human accuracy. Radiotherapy can be an effective treatment for head and neck cancers, but it can damage surrounding tissue unless doctors carefully delineate which areas to target, a process known as segmentation. The researchers trained their system on 662 scans covering 21 organs in the head and neck, such as the larynx and tongue, and for 19 of those organs, the system’s performance was within 5 percent of human experts’ segmentations. This approach could eventually reduce the lag time between diagnosis and treatment and make it easier to adapt radiotherapy as a tumor shrinks, making treatment safer.
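
One way to quantify how close a model's segmentation comes to an expert's is an overlap score such as the Dice coefficient, sketched below on tiny made-up masks. DeepMind's published evaluation uses its own surface-based metric on 3D scans, so this is only an illustration of the comparison step.

```python
# Minimal sketch: compare a predicted organ mask with an expert's mask using
# the Dice coefficient. The masks are tiny invented arrays; the actual
# DeepMind evaluation used 3D scans and a surface-based metric.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 1.0 means perfect overlap, 0.0 means none."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

expert = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 0]], dtype=bool)  # expert's organ mask
model = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]], dtype=bool)   # model's predicted mask

print(f"Dice overlap: {dice(model, expert):.2f}")
```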

Image: Colby Gutierrez-Kraybill.

