Machine learning has grown in importance and uptake in recent years as more organizations seek to employ this technology for faster and more efficient decision making.
Indeed, that’s why Apple recently announced plans to buy Turi Inc., why Intel purchased Nervana Systems, and why Salesforce snapped up MetaMind and PredictionIO. It’s also why such tech leaders as Amazon, Google, and IBM all offer machine learning capabilities as part of their cloud services.
In August, Apple revealed that it had purchased Turi for about $200 million. The move is part of a broader battle among Amazon, Facebook, and Google to gain an edge in AI, particularly in pervasive computing, in which software infers what people want, notes a Bloomberg article, which suggests that Turi’s technology could be used in concert with Apple’s Siri digital assistant.
That ties into the idea of chatbots, which can provide people and businesses with a range of information and assistance. Chatbots can automate processes, poll users about what they want, and then come back to them with those things. Facebook discussed chatbots earlier this year when it introduced its Messenger Platform, and customer service and unified communications companies such as Aspect (with its Mila chatbot) leverage AI to help users.
Also in August, Intel bought deep learning company Nervana Systems for more than $350 million.
“While Intel has a running business in high-performance computing, it has taken a back seat to Nvidia, another HPC supplier, when it comes to creating chips for deep learning, a trendy type of artificial intelligence that involves training artificial neural networks on lots of data and then getting them to make inferences on new data,” VentureBeat reported. “Google has deployed competing chips named tensor processing units (TPUs) that can handle Google’s TensorFlow open source deep learning framework. It’s unclear if Google has joined with Intel in that affair. In any case, this move represents a clear signal that Intel is determined to gain ground in the world of AI.”
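The train-then-infer workflow VentureBeat describes can be sketched in a few lines. The toy example below (a hypothetical illustration using plain NumPy and synthetic data, not any vendor's actual pipeline) fits a tiny one-layer network on 1,000 examples of a simple rule, then makes inferences on inputs it has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training phase: 1,000 synthetic examples of a simple rule
# (label = 1 when the sum of the two features exceeds 1.0).
X = rng.random((1000, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # weights of a single-layer network
b = 0.0           # bias
lr = 0.5          # learning rate
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    grad = p - y                            # gradient of the log-loss
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

# Inference phase: apply the trained weights to new, unseen inputs.
X_new = np.array([[0.9, 0.9], [0.1, 0.1]])
preds = (1.0 / (1.0 + np.exp(-(X_new @ w + b))) > 0.5).astype(int)
print(preds)  # [1 0]
```

Real deep learning workloads scale this same loop to millions of parameters and examples, which is exactly where the specialized chips from Nvidia, Google, and Nervana come in.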
Diane Bryant, executive vice president and general manager of the data center group at Intel, discussed artificial intelligence, machine learning, and deep learning, and their various applications, in an Aug. 9 post on the Intel website.
“Encompassing compute methods like advanced data analytics, computer vision, natural language processing, and machine learning, artificial intelligence is transforming the way businesses operate and how people engage with the world,” Bryant wrote. “Machine learning, and its subset deep learning, are key methods for the expanding field of AI. Intel processors power more than 97 percent of servers deployed to support machine learning workloads today. The Intel Xeon processor E5 family is the most widely deployed processor for deep learning inference and the recently launched Intel Xeon Phi processor delivers the scalable performance needed for deep learning training. While less than 10 percent of servers worldwide were deployed in support of machine learning last year, the capabilities and insights it enables makes machine learning the fastest growing form of AI.”
Nimbix is another company catering to organizations that want to use machine learning. Its platform, called JARVICE, is based on Nvidia’s Tesla K80 graphics processing units and allows organizations to run large-scale high-performance computing workloads in the cloud. JARVICE, which Nimbix charges for on a per-second basis, reduces developer time to deployment from weeks to hours.
“Cloud computing can be a highly effective and cost efficient way to free up space for HPC projects,” according to a June Nimbix blog. “Utilizing a high performance cloud for your organization’s HPC projects allows you to stay up to date and on top of the fast changing applications and infrastructure.”
The blog said the solution provides a scalable environment that is flexible and secure, and offers an easy and efficient way to manage and monitor workflows. It also features a simplified reporting tool for tracking consumption and costs, said Nimbix, which in May was named among the Cool Vendors for Compute Platforms by Gartner Inc.
According to the Nvidia website: “Data scientists in both industry and academia have been using GPUs for machine learning to make groundbreaking improvements across a variety of applications including image classification, video analytics, speech recognition, and natural language processing.”
GPUs have been particularly popular in deep learning environments, Nvidia says. Deep learning uses sophisticated, multi-level neural networks to create systems that can perform feature detection from massive amounts of unlabeled training data.
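"Feature detection from unlabeled data" can sound abstract, so here is a minimal hedged sketch of the idea: a tiny linear autoencoder (synthetic data, plain NumPy, no real deep learning framework) is trained only to reconstruct its input, with no labels, and in doing so learns a compressed 2-D feature code for 4-D inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data: 500 samples with hidden 2-D structure
# embedded in 4 dimensions. No labels are used anywhere below.
latent = rng.standard_normal((500, 2))
mix = rng.standard_normal((2, 4))
X = latent @ mix

W_enc = rng.standard_normal((4, 2)) * 0.1  # encoder: 4-D input -> 2 features
W_dec = rng.standard_normal((2, 4)) * 0.1  # decoder: 2 features -> 4-D output
lr = 0.01

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error."""
    return ((X @ W_enc @ W_dec - X) ** 2).mean()

initial = loss(X, W_enc, W_dec)
for _ in range(500):
    H = X @ W_enc                      # learned features (the code)
    E = H @ W_dec - X                  # reconstruction error
    W_dec -= lr * (H.T @ E) / len(X)   # gradient step on the decoder
    W_enc -= lr * (X.T @ (E @ W_dec.T)) / len(X)  # and on the encoder
final = loss(X, W_enc, W_dec)
print(final < initial)  # True: error drops without any labels
```

The matrix multiplications in the training loop are exactly the operations GPUs parallelize well, which is why this style of workload maps so naturally onto hardware like the Tesla K80.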
Nvidia adds that while machine learning is not a new idea, the massive amounts of training data now available, paired with the powerful and efficient parallel processing that GPUs provide, have lowered the barriers to entry and greatly increased uptake of the technology.
“Early adopters of GPU accelerators for machine learning include many of the largest web and social media companies, along with top-tier research institutions in data science and machine learning,” says Nvidia. “With thousands of computational cores and 10-100x application throughput compared to CPUs alone, GPUs have become the processor of choice for processing big data for data scientists.”
Edited by Alicia Young