
Three Breakthroughs that have unleashed AI on the world


EdEarl


 

Wired.com

 

Today’s Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that run several hundred “instances” of the AI at once. Like all things cloudy, Watson is served to simultaneous customers anywhere in the world, who can access it using their phones, their desktops, or their own data servers. This kind of AI can be scaled up or down on demand. Because AI improves as people use it, Watson is always getting smarter; anything it learns in one instance can be immediately transferred to the others. And instead of one single program, it’s an aggregation of diverse software engines—its logic-deduction engine and its language-parsing engine might operate on different code, on different chips, in different locations—all cleverly integrated into a unified stream of intelligence.

 

The Breakthroughs

  1. Cheap Parallel Computation
  2. Big Data
  3. Better AI algorithms

AI Everywhere

  1. Google
  2. Auto driving
  3. Financial Predictions
  4. others in development, with billions of dollars invested in AI

 

AI - Brain

  1. 1997: Deep Blue vs. Kasparov
  2. Today: AI plus Kasparov vs AI

Tomorrow, everyone will be AI-augmented, even more pervasively than we carry cell phones today.

Will the Unified Theory be discovered by AI plus physicists? What is required?

 


  • 3 weeks later...

Given charlatan reviewers and the wide range of paper quality, each scientist might use a Watson assistant to read and rate papers. Its owner would train the AI by having it read papers containing relevant information, then let it read the general literature to select papers for the scientist to read.
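The idea can be sketched very roughly: build a word-frequency profile from papers the scientist marked as relevant, then rank unread abstracts by similarity to that profile. This is a minimal bag-of-words sketch, nothing like Watson's actual pipeline, and the paper texts are hypothetical placeholders.

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    """Lowercase word counts, ignoring very short tokens."""
    return Counter(w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 2)

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def train_profile(relevant_papers):
    """Merge the word counts of papers the scientist found relevant."""
    profile = Counter()
    for text in relevant_papers:
        profile.update(bag_of_words(text))
    return profile

def rank_papers(profile, candidates):
    """Return candidate texts sorted by similarity to the profile, best first."""
    return sorted(candidates,
                  key=lambda t: cosine(profile, bag_of_words(t)),
                  reverse=True)

# Hypothetical training set and candidate abstracts:
profile = train_profile([
    "neural network training with gradient descent",
    "deep learning for image recognition with neural networks",
])
ranked = rank_papers(profile, [
    "a survey of medieval pottery styles",
    "convolutional neural networks for recognition tasks",
])
print(ranked[0])  # the neural-network abstract ranks first
```

A real assistant would of course need far richer features than word overlap, but the train-then-filter workflow is the same.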


  • 1 month later...

 

The Breakthroughs

  1. Better AI algorithms

 

Not really better algorithms... What runs at present is still gradient backpropagation (Yann LeCun, around 1987) and simulated annealing of neural-network coefficients (a few years later). It's mainly that computers are faster now, and by chance such algorithms map onto parallel computers reasonably well.
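For readers unfamiliar with the technique mentioned above, here is the gradient idea at its absolute smallest: a single sigmoid neuron trained by gradient descent to learn logical AND. This is a toy sketch, not LeCun's multi-layer formulation; the learning rate and epoch count are arbitrary choices.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: inputs and targets for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # weights and bias
lr = 1.0                     # learning rate (arbitrary)

for epoch in range(5000):
    for (x1, x2), t in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)   # forward pass
        # Backward pass: gradient of the squared error, propagated
        # through the sigmoid (dy/dz = y * (1 - y)).
        delta = (y - t) * y * (1 - y)
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b  -= lr * delta

# After training, the neuron approximates AND:
for (x1, x2), t in data:
    print(x1, x2, round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

The same forward-pass / backward-pass loop, repeated across many layers and millions of weights, is what today's fast parallel hardware makes practical.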

 

You know the proverb: when programming, you can be clever and fast, or you can be dumb and slow so that hardware progress catches up with your bad software. But you shouldn't be dumb and fast.

 

(Hmm, Yann certainly isn't dumb. Impressive, in fact. But I can't resist a silly joke.)

