This Is How Nvidia Shapes the Future of Artificial Intelligence

It all started over a cup of coffee. Three engineers sitting in a dingy diner in East San Jose resolved to devote their careers to creating specialized chips: chips that could revolutionize video games through enhanced image processing and more realistic graphics.

Back in 1993, Chris Malachowsky, Curtis Priem, and Nvidia's current CEO, Jen-Hsun Huang, co-founded Nvidia Corporation on a forecast of where technology was headed. They bet on a budding technology, the graphics processing unit (GPU), the graphics card now sold across the world to satisfy gamers' appetites.

Two decades later, Nvidia's revenue had climbed to roughly $5 billion. More than 63% of that revenue still comes from those video-game chips, yet gaming is not what has Wall Street salivating over the firm.

While working to scale up and render ever more detailed high-definition graphical landscapes, Nvidia's engineers discovered that the very same technology could ignite artificial intelligence, specifically the technique known as "deep learning."

Deep learning enables software to learn on its own. Instead of a programmer hand-coding every specific rule, with inaccuracies creeping in at every level, the technique loosely mimics the way the brain works, using networks of neurons connected by synapses and adjusting those connections from examples.
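To make "learning instead of hand-coding" concrete, here is a minimal sketch using a single artificial neuron (a perceptron), the simplest ancestor of the deep networks discussed in this article. The task (learning the logical AND function) and all names here are illustrative, not anything from Nvidia's software.

```python
# A single neuron learns the AND function from examples.
# No rule for AND is written anywhere; the weights are adjusted
# from data, which is the core idea behind deep learning.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # synapse-like weights, one per input
    b = 0.0         # bias (firing threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: the neuron "fires" (outputs 1)
            # when its weighted input exceeds the threshold.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training examples: inputs and the desired AND output.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

A real deep network stacks thousands of such neurons in layers, which is exactly the massive computation Huang describes below.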

Historically, the concept has been around for decades; it failed to take off for lack of two things:

  • Enough data to train the algorithms
  • Access to sufficient computing horsepower

The advent of the Internet solved the first problem: vast amounts of data were suddenly at everyone's fingertips. Computing power, however, still had a long way to go.

In 2006, Nvidia announced a new programming toolkit called CUDA. It lets developers treat a GPU as thousands of tiny computers working in parallel, the same cores that render each individual pixel on the screen, while hiding the low-level math behind shadows, reflections, lighting, and transparency. Nvidia's engineers put years of work into designing it so that developers could program GPUs in high-level languages such as C and C++, making Nvidia's chips the cheapest and easiest to code for.
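The "thousands of tiny computers" idea can be sketched in plain Python. This is a conceptual, sequential emulation only, not actual CUDA code: on a real GPU, CUDA would launch one thread per pixel and run the per-pixel function for all of them in parallel. The function and image dimensions here are invented for illustration.

```python
# Emulating the CUDA execution model: one "kernel" function is
# applied independently to every pixel. On a GPU each call would
# run on its own thread; here we loop over them sequentially.
WIDTH, HEIGHT = 8, 4

def shade_pixel(x, y):
    # Stand-in for per-pixel math (lighting, shadows, etc.):
    # a simple brightness gradient across the image.
    return (x + y) / (WIDTH + HEIGHT - 2)

# CUDA would launch WIDTH * HEIGHT threads, each executing
# shade_pixel exactly once, all at the same time.
framebuffer = [[shade_pixel(x, y) for x in range(WIDTH)]
               for y in range(HEIGHT)]
```

Because every pixel is computed independently, the same hardware trick applies to any workload made of many identical, independent calculations, which is precisely what training a neural network is.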

As Huang puts it,

“Deep learning is almost like the brain,” Huang says. “It’s unreasonably effective. You can teach it to do almost anything. But it had a huge handicap: It needs a massive amount of computation. And there we were with the GPU, a computing model almost ideal for deep learning.”

This technology is one of the reasons Google, Microsoft, Facebook, and Amazon are buying Nvidia chips in large quantities for their data centers. Tesla has announced that it will integrate the technology into its cars to enable autonomous driving, and Massachusetts General Hospital is already using it in medical imaging to spot anomalies in CT scans. Nvidia's chips are fast becoming the tools of choice for building AI into machines.

“Deep learning can increase the chances of finding anomalies in cells that lead to cancer”

Until around 2008, deep learning remained unpopular; most researchers relied on algorithmic tricks instead. Then, in 2010, over dinner at a Japanese restaurant in Palo Alto, two eager scientists pitched Google's then-CEO Larry Page on starting a deep learning research group inside the company. Who knew the idea would lay the foundations of Google Brain? Today many Google products, most notably the search engine itself, run on the same brain-inspired principles.

Watching the boost deep learning gave Google in the market, the technique became the new talk of the town. Rival giants such as Microsoft, Facebook, and Amazon launched their own research divisions, and in no time it was clear that Nvidia's investment in its underlying software ecosystem, CUDA, had paid off after all.

The technique has also taken hold in cloud computing and the data center. According to Nvidia, a large share of its customers now come from top players such as Tencent, IBM, and Baidu, which have recently adopted its GPUs in their cloud services.

According to Forbes,

“Nvidia's first-quarter revenue came to $1.94 billion, up 48% year over year, with earnings of 79 cents per share, up 126% from a year ago.”

On the strength of those numbers, Nvidia aims to train more than 100,000 developers this year through a program it calls the Deep Learning Institute. The goal is to equip developers, researchers, and data scientists with working knowledge of AI tools and technology. The company has set up 14 labs, with plans for a tenfold increase in the years ahead.

Artificial intelligence is still in its early bloom, and emerging rivals such as Intel are coming to the forefront. Intel paid more than $400 million for the startup Nervana to build AI chips that can rival Nvidia's technology.

Asked about the rising competition in the market, Nvidia responds,

“Unlike competitors, Nvidia doesn't need to make big acquisitions to play the game, and it is now common for new customers to call on Nvidia for help”