Key Companies Generating AI Hardware


BEIJING--(BUSINESS WIRE)--Today, Bitmain announced its first products for accelerating artificial intelligence (AI) applications, applying its industry-leading expertise in Bitcoin mining hardware and system design.

The products are:

  • The BM1680, a customized tensor computing ASIC (Application Specific Integrated Circuit), which is optimized for multiple types of inference and training functions for deep learning networks
  • The SC1, an advanced fan-cooled module that combines the BM1680 into a compact, easy-to-integrate package

Bitmain’s new line of AI hardware products delivers superior performance and optimized costs when compared to traditional implementations using graphics processing units (GPUs). Bitmain’s Sophon hardware can be applied in a variety of industries and use cases including image and speech recognition, autonomous vehicle technology, enhanced security camera surveillance, robotics, the Internet of Things (IoT), and many other AI applications.

“Deep learning is very intensive computationally and our experience in creating high-performing hardware for Bitcoin has absolutely prepared us for this exciting area of computing,” notes Micree Zhan, Bitmain’s CEO. “AI hardware is an area that Bitmain is proactively developing to power the next generation of AI applications.”

Mr. Zhan further explained, “Bitmain saw trends in the AI business that were similar to the early days of Bitcoin, and so we started to explore AI toward the end of 2015. Now after only a year and a half, we have the mass-production chips in hand.”

The BM1680 and SC1 are fully compatible with popular AI platforms and models, including Caffe, Darknet, GoogLeNet, VGG, ResNet, Yolo, Yolo2 and others, making them simple for developers to use. Bitmain has demonstrated its hardware’s capabilities in motor vehicle, human/object, and face recognition, and has also completed training for AlexNet, GoogLeNet, VGG, ResNet and other mainstream AI networks.

Micree Zhan will deliver a keynote presentation entitled “Planet-Scale Computing Driven by AI” at the AI Cloud Forum at AIWORLD 2017 in Beijing on November 8. It will include discussion of the BM1680’s key features.

Price and Availability

Samples of the SC1 module are available immediately and can be purchased from https://www.sophon.ai/product/sc1.html.

About Bitmain

Founded in 2013, Bitmain Technologies, described as the world's most valuable bitcoin company, was established to develop and sell the world's leading bitcoin miners using Bitmain's ASIC chip technology. Bitmain is now among the most recognizable companies in the cryptocurrency space and the proud parent of several brands, among them Antminer, Antpool, BTC.COM, Hashnest, and Sophon, all of which are among the leading products in their respective fields. Bitmain's machines and customers are present in more than 100 countries across the globe. Bitmain remains devoted to the production of high quality and efficient computing chips, high density server equipment, and large scale parallel computing software. The company is proudly headquartered in Beijing, with offices in Amsterdam, Hong Kong, Tel Aviv, Qingdao, Chengdu and Shenzhen.

A few days ago, Facebook open-sourced its artificial intelligence (AI) hardware computing design. Most people don’t know that large companies such as Facebook, Google, and Amazon don’t buy hardware from the usual large computer suppliers like Dell, HP, and IBM, but instead design their own hardware based on commodity components. The Facebook website and all its myriad apps and subsystems run on a cloud infrastructure constructed from tens of thousands of computers designed from scratch by Facebook’s own hardware engineers.

Open-sourcing Facebook’s AI hardware means that deep learning has graduated from the Facebook Artificial Intelligence Research (FAIR) lab into Facebook’s mainstream production systems intended to run apps created by its product development teams. If Facebook software developers are to build deep-learning systems for users, a standard hardware module optimised for fast deep learning execution that fits into and scales with Facebook’s data centres needs to be designed, competitively procured, and deployed. The module, called Big Sur, looks like any rack mounted commodity hardware unit found in any large cloud data centre.

But Big Sur differs from the other data centre hardware modules that serve Facebook’s browser and smartphone newsfeeds in one significant way: it is built around the Nvidia Tesla M40 GPU. Up to eight Tesla M40 cards can be squeezed into a single Big Sur chassis, and each card has 3,072 cores and 12GB of memory.

While GPUs were first built for rendering graphics, in recent years they have been embraced as the poor man’s supercomputer: their large number of cores can be incredibly effective on parallel processing problems, from cracking passwords to scientific applications like machine learning.
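The parallelism that makes GPUs suit machine learning comes down to one fact: neural-network workloads are dominated by large matrix multiplications, in which every output value can be computed independently. A minimal Python sketch (using NumPy, which dispatches to multithreaded BLAS on the CPU; GPU libraries apply the same idea across thousands of cores):

```python
import numpy as np

# A dense neural-network layer is essentially one big matrix multiply:
# each output neuron's weighted sum is independent of the others, so the
# work splits naturally across however many cores the hardware provides.
rng = np.random.default_rng(0)

batch = rng.standard_normal((256, 4096))     # 256 inputs, 4096 features each
weights = rng.standard_normal((4096, 1024))  # a layer with 1024 output units

# One call computes 256 * 1024 independent dot products of length 4096.
activations = batch @ weights

print(activations.shape)  # (256, 1024)
```

This is why a card with thousands of simple cores outperforms a handful of fast general-purpose cores on deep learning: the workload is almost entirely this kind of embarrassingly parallel arithmetic.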

By design, none of the Big Sur components are unique. Three years ago Facebook launched the independent Open Compute Project with other large cloud computing-centric companies such as Microsoft. The plan was to extend its open-source software strategy to hardware, to gain the advantages of open research and development and the economic advantage of large-scale manufacturing by combining its purchasing volume with that of other large cloud infrastructure companies. Facebook has announced that it will be submitting the Big Sur design to the Open Compute Project.

In the same announcement, Facebook also said that “[We have] a culture of support for open source software and hardware, and FAIR has continued that commitment by open-sourcing our code and publishing our discoveries as academic papers freely available from open-access sites. We want to make it a lot easier for AI researchers to share techniques and technologies.”

Something deep this way comes

During the last month Google, Microsoft, and IBM all released open-source machine learning projects. Facebook cited the Torch project as an example of its commitment to open-source deep learning software. Torch is a scientific computing framework that includes machine learning libraries optimised for neural networks based on the Lua programming language. Many of the top companies like Google, Facebook, Twitter, and IBM share research and software development through the Torch project.
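Torch itself is built on Lua, but the core service such frameworks provide — composing layers into a network and running data through it — can be sketched in plain Python with NumPy. This is a hypothetical illustration of the idea, not Torch’s actual API:

```python
import numpy as np

def relu(x):
    # Standard rectified-linear activation used throughout deep learning.
    return np.maximum(x, 0.0)

class TwoLayerNet:
    """Toy feed-forward network: the kind of building block a framework
    like Torch packages up with optimised, GPU-ready implementations."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Small random weights for the two layers.
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.01
        self.w2 = rng.standard_normal((n_hidden, n_out)) * 0.01

    def forward(self, x):
        # Two matrix multiplies with a non-linearity between them.
        return relu(x @ self.w1) @ self.w2

net = TwoLayerNet(n_in=8, n_hidden=16, n_out=4)
out = net.forward(np.ones((2, 8)))
print(out.shape)  # (2, 4)
```

What a production framework adds on top of this sketch is exactly what the article describes: optimised kernels, GPU execution, and shared, battle-tested implementations of common layers.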

Wired’s report that Facebook’s open-sourcing of Big Sur was intended to flank Google’s significant deep learning initiatives is contradicted by the cooperation between Google, Facebook, and other top names researching deep learning. These companies may add machine learning features into proprietary applications that are commercial competitors, but they also collaborate on creating the tools that are being used to build these proprietary apps in the first place.

The cooperation is akin to the cooperation between many of the same large platform company competitors who collaborated for more than a decade on the open-source Hadoop framework that propelled big data predictive analytics from academia and research labs into mainstream use.

All of these companies are trying to solve similar problems. Facebook M, for example, can among other things use deep learning to answer questions about the contents of an image. Facebook’s AI chief Yann LeCun demonstrated M in a video at MIT EmTech in November; the demonstration also neatly delineates how applications can combine deep-learning-based artificial intelligence with human interaction.

We are stronger together

The history of deep learning has been one of cooperation, and it appears that for the time being it will remain so. The acceleration of research through the network effect of shared open projects at this early stage of commercialisation outweighs proprietary development. For more than a decade, the technical leadership of deep learning and neural network research has been driven by academics; two of the most notable are Facebook’s LeCun and Google’s head of AI and deep learning Geoffrey Hinton. The relationship stretches back to LeCun’s work at the University of Toronto as Hinton’s postdoctoral research associate.

During his talk at MIT EmTech, LeCun explained that for more than a decade when he was on the faculty of NYU, he and Hinton (then on the faculty of the University of Toronto), Yoshua Bengio (of the University of Montreal), and Andrew Ng (then on the faculty of Stanford, now Chief Scientist at Baidu Research and formerly of Google) collaborated on deep learning. After neural networks fell out of favour their collaboration, once referred to as the deep learning conspiracy, kept this field of research alive throughout the period of unpopularity. He credits the recent successes of deep learning to the increase in compute speed and the availability of training data. He also credits Hinton’s student Alex Krizhevsky for programming GPUs to solve deep learning problems.

The availability of training data sets looks like it will go from good to better, too. OpenAI, a new non-profit artificial intelligence research company, was founded on Friday with up to £660 million ($1 billion) in funding from a group of Silicon Valley billionaires that includes Elon Musk and Peter Thiel. It will be led by Ilya Sutskever who studied under Hinton at the University of Toronto, worked at Google Brain, and worked under Ng as a post-doctoral researcher. The goal of OpenAI is to advance digital intelligence in the way that is most likely to benefit humanity. OpenAI takes a new approach to AI by sharing deep learning training data sets, the raw material currently required to create artificial intelligence.

Deep learning is back, baby

Interest in deep learning is exploding, though it is still a very academic field. The board and organising committees of the premier annual AI/deep learning event, the Neural Information Processing Systems Foundation (NIPS) annual conference, are almost exclusively from universities and research institutes. Only a few companies turn up, such as Google, Facebook, and IBM.

University of Sheffield CS professor Neil Lawrence compiled registration data from the most recent NIPS conference and published an analysis on Facebook illustrating that deep learning and neural networks have reached a tipping point.

Growth in the size of the NIPS conference, increased investments by tech industry leaders, and the growing base of open-source hardware and software are good measures of the progress of AI and deep learning.

Though these tools will be used to add machine learning features to proprietary applications and create differentiated user experiences, much of the progress will continue to be made in academia, motivating continued academic and commercial cooperation in tool building. That cooperation will also identify the next prodigies to follow LeCun and Hinton.

Steven Max Patterson lives in Boston and San Francisco following trends in software development platforms, mobile, IoT, wearables and next generation television. His writing is influenced by his 20 years experience covering or working in the primordial ooze of tech startups. You can find him on Twitter at @stevep2007.