Fukoku Mutual Life Insurance is turning to artificial intelligence to increase productivity and save money.
The artificial intelligence system is based on IBM’s Watson Explorer, which can analyze massive amounts of data, including images, audio, and video. It will be used to read tens of thousands of medical certificates, calculate lengths of hospital visits and stays, analyze patients’ medical histories, and calculate payouts.
The installation of the artificial intelligence system will replace 34 employees. Other companies, such as Dai-ichi Life Insurance and Japan Post Insurance, are either interested in AI or have already begun implementing AI systems.
Microsoft is opening up Cortana to third-party device makers and developers with a Cortana Skills Kit and a Cortana Devices SDK.
The Cortana Skills Kit will let developers build apps whose features can be triggered by voice commands via Cortana. The Devices SDK lets third-party hardware makers build devices with Cortana on board.
Currently the Skills Kit is in private preview with a few partners, such as Expedia, TalkLocal, and Capital One. Microsoft is also working with Harman Kardon on a new device with Cortana integration, which looks a lot like the Amazon Echo.
Facebook has been getting a lot of negative attention after it emerged that widely shared fake news articles on the platform could have spread misinformation about the presidential election.
To combat this, Facebook is now relying on machine learning algorithms, alongside user reports, to detect fake news. Facebook turned to AI in order to reduce human bias.
It has now been shown that an unregulated news platform can have negative consequences at every scale. Some suggest Facebook’s misinformation may have contributed to Donald Trump’s election as the 45th president of the United States. CEO Mark Zuckerberg addressed the issue in a statement: “Of all the content on Facebook, more than 99 percent of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics.”
Facebook has faced this problem before and decided not to censor news for fear of bias. Now, however, it must rethink its strategy for an open news platform that is circulating misinformation. Whichever direction Facebook takes will affect everyone.
A team of American and British computer scientists has created an AI capable of predicting the outcomes of human rights trials. The AI analyzed almost 600 cases from the European Court of Human Rights and was able to predict the court’s judgments with a whopping 79 percent accuracy.
In a study published in the journal PeerJ Computer Science, the AI analyzed 584 cases concerning the prohibition of torture and degrading treatment, the right to a fair trial, and the right to respect for private and family life. The AI analyzed case descriptions, legal arguments, case histories, and related legislation. It then looked for patterns in the data that led to certain judgments, such as the severity of crimes.
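To give a sense of how text patterns can predict a judgment, here is a minimal sketch of the general approach. It is not the researchers’ actual model (the study used n-gram features with a machine learning classifier); the case texts and labels below are invented placeholders.

```python
from collections import Counter

def featurize(text):
    # Represent a case's text as a bag of word counts.
    return Counter(text.lower().split())

def train(cases):
    """cases: list of (text, label) pairs; returns per-label word totals."""
    centroids = {}
    for text, label in cases:
        centroids.setdefault(label, Counter()).update(featurize(text))
    return centroids

def predict(centroids, text):
    feats = featurize(text)
    # Score each label by word overlap with the texts seen for that label.
    def score(label):
        return sum(feats[w] * centroids[label][w] for w in feats)
    return max(centroids, key=score)

# Invented toy "facts" sections, labeled with the court's outcome.
cases = [
    ("applicant detained without trial for years", "violation"),
    ("applicant alleges degrading treatment in detention", "violation"),
    ("domestic courts heard the applicant fairly", "no violation"),
    ("proceedings were conducted fairly and promptly", "no violation"),
]
model = train(cases)
print(predict(model, "detained in degrading conditions without trial"))
# -> violation
```

The real system worked the same way in spirit: wording that co-occurs with past violations pushes a new case toward a predicted violation.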
“Results indicate that the ‘facts’ section of a case best predicts the actual court’s decision, which is more consistent with legal realists’ insights about judicial decision-making. We also observe that the topical content of a case is an important indicator whether there is a violation of a given Article of the Convention or not,” the researchers wrote in the study.
“We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes. It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights,” UCL researcher Nikolaos Aletras said in a statement.
A team of researchers at the University of Leeds has done the unthinkable: they have created a chatbot, or digital character, of Joey Tribbiani from Friends.
The team (James Charles, Derek Magee, and Dave Hogg of the University of Leeds computer science department in the UK) is building a series of algorithms able to understand and track an individual’s body language, facial expressions, voice, and mannerisms. A machine learning tool learns how the character forms sentences by reading the show’s scripts, as reported by Prosthetic Knowledge.
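Learning “how a character forms sentences” from a script can be sketched very simply. The toy below is not the Leeds team’s model, just an illustration of the idea: record which word follows which in the character’s lines, then generate new lines from those statistics. The sample lines are invented placeholders.

```python
import random
from collections import defaultdict

def build_chain(lines):
    # Map each word to the words observed to follow it in the script.
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, max_words=10, seed=0):
    # Walk the chain, picking a plausible next word at each step.
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

lines = [
    "how you doing",
    "how you been",
    "you know the thing",
]
chain = build_chain(lines)
print(generate(chain, "how"))
```

Every generated word is one the character actually used, in an order the character actually used, which is why even this crude method produces recognizably in-character phrasing.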
While the chatbot is not highly advanced, and has some major differences that set it apart from the actual character, it points to an interesting direction for future chatbots and AI. Advances in this technology could lead to chatbots and AIs with characteristics and voices nearly identical to those of real or fictional people.
In a paper, the team writes: “We plan to improve the rendering of the avatar and extend our model to include interaction with real people and also between avatars.”
Pittsburgh is testing AI traffic lights, made by Surtrac, to reduce congestion, and the smart traffic management system has produced impressive results.
The team started implementing the AI traffic control system at 9 intersections in Pittsburgh’s East Liberty neighborhood in 2012. It later began to expand citywide and now spans 50 intersections.
The system has reduced travel time by 25 percent and idling time by over 40 percent, and the researchers estimate it cuts emissions by 21 percent. Stephen Smith, a Carnegie Mellon University professor of robotics and founder of the startup Surtrac, stated at the White House Frontiers Conference that traffic congestion costs the U.S. economy $121 billion a year, mostly in lost productivity, and produces about 25 billion kilograms of carbon dioxide emissions.
Conventional traffic lights run on preprogrammed timing that is rarely updated. Surtrac’s smart artificial intelligence traffic signals instead adapt to changing traffic conditions. The system relies on coordinating traffic lights: radar sensors and cameras attached to each light detect the flow of traffic, and AI algorithms take that data and build a timing plan that, according to Smith, “moves all the vehicles it knows about through the intersection in the most efficient way possible.” Smith says each signal makes its own timing decisions; the system is not centralized.
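The decentralized idea can be sketched in a few lines. This is not Surtrac’s actual algorithm (which builds a full timing plan, not a single phase); it is a simplified illustration in which each intersection independently serves the approach with the longest detected queue, with green time proportional to that queue. The constants are invented for the example.

```python
# Assumed tuning constants for the sketch, not Surtrac's real values.
MIN_GREEN, MAX_GREEN, SECONDS_PER_CAR = 5, 60, 2

def plan_phase(queues):
    """queues: dict of approach -> vehicles detected by radar/camera.
    Returns (approach to serve next, green duration in seconds).
    Each intersection runs this on its own; nothing is centralized."""
    approach = max(queues, key=queues.get)
    green = min(MAX_GREEN, max(MIN_GREEN, queues[approach] * SECONDS_PER_CAR))
    return approach, green

# One intersection deciding from its own local sensor readings.
print(plan_phase({"north": 12, "south": 3, "east": 7, "west": 1}))
# -> ('north', 24)
```

Because each signal decides from local sensing alone, adding an intersection to the network doesn’t require reprogramming a central controller, which is part of why the Pittsburgh deployment could grow from 9 intersections to 50.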
The next step is to have the system talk to cars. The team has installed short-range radios at 24 intersections, which in the future could let drivers know about traffic conditions ahead of them or which lights are about to change.
Microsoft’s speech recognition team has hit a major milestone: a system that recognizes conversational speech as well as humans do. In a paper published Monday, researchers and engineers in Microsoft’s Artificial Intelligence and Research group reported a speech recognition system with a word error rate of 5.9 percent, on par with professional human transcribers. “We’ve reached human parity,” said Xuedong Huang, the company’s chief speech scientist, in a Microsoft blog post. “This is an historic achievement.”
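For context on the 5.9 percent figure, word error rate is a standard metric: the minimum number of word substitutions, insertions, and deletions needed to turn the system’s transcript into the reference transcript, divided by the number of reference words. A straightforward edit-distance implementation:

```python
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat in the mat"))
# -> 0.1666... (one substitution out of six reference words)
```

A 5.9 percent rate means roughly one word in seventeen is wrong, which is about what professional human transcribers achieve on the same conversational test sets.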
“Our progress is a result of the careful engineering and optimization of convolutional and recurrent neural networks,” reads the paper. “These acoustic models have the ability to model a large amount of acoustic context.”
While the system can’t hear as well as humans in all situations or environments, it’s still a major improvement that may show up in future Microsoft products.
Google’s DeepMind, which has been developing intelligent computers, has created a way for AI to mimic human speech.
DeepMind, which was acquired by Google in 2014, developed an AI called WaveNet that can mimic human speech by learning to create individual sound waves, according to its blog post. The company conducted blind tests with human listeners, in U.S. English and Mandarin Chinese, which found that WaveNet sounded more natural than existing technologies.
Current speech programs stitch together recordings from a single human speaker, which is why they don’t sound very natural. WaveNet is different because it doesn’t rely on a human recording every single word. WaveNet is a neural network, an AI design loosely modeled on how the human brain functions; however, it requires large data sets.
According to the blog post, the audio signal has to be sampled 16,000 times per second or more, and the network has to predict what each sound wave sample should look like based on the samples that came before it, which is challenging.
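The sample-by-sample idea can be illustrated with a toy example. WaveNet itself is a deep neural network; this sketch only shows the autoregressive principle it relies on, using the fact that a pure tone obeys an exact recurrence, so each new audio sample can be computed from the two samples before it.

```python
import math

SAMPLE_RATE = 16_000   # samples per second, as in the blog post
FREQ = 440.0           # an A4 tone, chosen for the example

# A sine wave satisfies x[n] = 2*cos(w)*x[n-1] - x[n-2] exactly,
# so two past samples fully determine the next one.
w = 2 * math.pi * FREQ / SAMPLE_RATE
coef = 2 * math.cos(w)

# Seed with the first two true samples, then generate the rest
# one sample at a time from the samples that came before.
samples = [math.sin(0), math.sin(w)]
for n in range(2, SAMPLE_RATE // 100):   # 10 ms of audio = 160 samples
    samples.append(coef * samples[-1] - samples[-2])

# Each generated sample matches the true sine wave.
print(abs(samples[100] - math.sin(100 * w)) < 1e-6)  # -> True
```

Real speech has no such neat formula, which is why WaveNet needs a large neural network and big training sets to learn the mapping from past samples to the next one.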
This is an interesting development in AI-generated speech. Many companies have developed speech-capable AI, such as Microsoft’s Cortana, Apple’s Siri, and Amazon’s Alexa, so an AI capable of producing more natural-sounding speech would greatly benefit users. Humans are also interacting with AI more and more, whether through Siri, Cortana, Google Now, or Alexa.
Qihan showed off its mesmerizing humanoid robot, the Sanbot, at IFA 2016, Europe’s biggest tech show.
The Sanbot has flipper arms, a pair of wheels to move about, three cameras for security, a 3D camera for spatial awareness, an HD camera, a touchscreen tablet, and infrared sensors. The back of its head has a built-in HD projector, and its torso houses speaker grilles and a subwoofer. It can recognize faces and voices, and it even knows when it’s time to recharge. The Sanbot can be controlled using an Android or iOS app. It appears to be a personable humanoid: friendly, obviously cute, and harmless.
It has many uses in homes, shops, hospitals, schools, and many other places; it was made to meet human needs, and there are many. For example, a Sanbot at Shenzhen airport in China provides passengers with flight information. What’s unique is that the company lets developers and companies use publicly available tools to create new roles for the humanoid.
Its maker, Qihan, is known for home surveillance. According to the company, the Sanbot has been stationed at various places around China, such as airports. The humanoid costs around $6,000 (45,000 yuan), which is astonishingly cheap, and the company has reportedly shipped around 30,000 of these little humanoids.
Love AI and beer? IntelligentX Brewing Company does. The brewery has taken to using an AI to perfect its recipes.
IntelligentX Brewing Company is a partnership between Intelligent Layer, a machine learning company, and creative agency 10X. The two companies have created algorithms to process customer feedback in an effort to improve their beers.
The AI sits behind a Facebook Messenger bot that asks drinkers a series of questions about the beer, with answers given as ratings from 1 to 10, yes or no, or multiple choice. The responses are then used to refine the beers’ taste to match drinkers’ preferences over time. Codes printed on the bottles direct drinkers to the bot. The AI uses reinforcement learning to improve the way it asks questions, so it elicits more insightful answers in the future. This also allows consumers, in a way, to step into the kitchen and work with the brewer to enhance the product, in this case the beers.
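A feedback loop like this can be sketched as a simple bandit problem. This is not IntelligentX’s actual system; it is an invented illustration in which each candidate recipe tweak is an arm, drinkers supply 1-to-10 ratings, and the brewery mostly re-brews the best-rated tweak while occasionally exploring others (epsilon-greedy reinforcement learning).

```python
import random

class RecipeBandit:
    def __init__(self, tweaks, epsilon=0.1, seed=0):
        self.ratings = {t: [] for t in tweaks}  # tweak -> list of 1-10 scores
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def choose(self):
        # Try anything unrated first, then mostly exploit the
        # best-rated tweak, occasionally exploring at random.
        untried = [t for t, r in self.ratings.items() if not r]
        if untried:
            return untried[0]
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.ratings))
        return max(self.ratings,
                   key=lambda t: sum(self.ratings[t]) / len(self.ratings[t]))

    def rate(self, tweak, score):
        assert 1 <= score <= 10
        self.ratings[tweak].append(score)

# Hypothetical recipe tweaks and drinker feedback.
bandit = RecipeBandit(["more hops", "less carbonation", "darker malt"])
bandit.rate("more hops", 8)
bandit.rate("less carbonation", 4)
bandit.rate("darker malt", 6)
print(bandit.choose())  # -> more hops (highest average rating so far)
```

The occasional exploration is what lets the system notice when drinkers’ tastes shift, matching the article’s point about tracking erratically changing trends.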
IntelligentX says its beers Golden, Amber, Pale, and Black have each changed about 11 times during a 12-month trial period.
This system could also be used to identify trends in beer and decide when to offer which brew; since trends can change erratically, this is an ingenious way to track them.