Within a period of 20 years, AI has gone from “it’s an extremely human move” described by world chess champion Garry Kasparov in 1996 to “it’s not a human move” opined by Go grandmaster Fan Hui in 2016. There will come a time when humans can no longer explain the decision-making process of a superintelligent computer.

Sharing an interesting recent conversation on AI's impact on the economy.

AI has been compared to various historical precedents: electricity, the industrial revolution, and so on. I think the strongest analogy is that of AI as a new computing paradigm (Software 2.0), because both are fundamentally about the automation of digital information processing.

If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job to look at would be to what extent its algorithm is fixed, i.e., are you just mechanically transforming information according to rote, easy-to-specify rules (e.g. typing, bookkeeping, human calculators)? Back then, this was the class of programs that the computing capability of the era allowed us to write (by hand, manually).

With AI, we are now able to write programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is my Software 2.0 blog post from a while ago. In this new programming paradigm, the most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It's about the extent to which an AI can "practice" something. The environment has to be resettable (you can start a new attempt), efficient (a lot of attempts can be made), and rewardable (there is some automated process to reward any specific attempt that was made).
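The "specify an objective, search program space by gradient descent" loop above can be sketched in miniature. This is a toy illustration, not code from the Software 2.0 post: the "program" is a single-parameter model y = w * x, and the search uses a numeric gradient rather than backpropagation, but the shape of the loop is the same.

```python
import random

def objective(w, data):
    # The specified objective: mean squared error of the candidate
    # "program" y = w * x against the desired input/output pairs.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, lr=0.01, steps=200):
    # Search program space (here, a single parameter w) by gradient
    # descent on the objective, instead of writing the program by hand.
    w = random.uniform(-1.0, 1.0)
    eps = 1e-5
    for _ in range(steps):
        # Numeric estimate of d(objective)/dw via central differences.
        grad = (objective(w + eps, data) - objective(w - eps, data)) / (2 * eps)
        w -= lr * grad
    return w

# The target behavior is specified only through examples: y = 3x.
data = [(x, 3.0 * x) for x in range(1, 6)]
w = train(data)
```

The programmer never writes "multiply by 3"; the optimizer recovers it from the objective, which is the essential inversion Software 2.0 describes.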

The more verifiable a task/job is, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out of the neural net's magic of generalization (fingers crossed), or come via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs: tasks that are verifiable progress rapidly, possibly even beyond the ability of top experts (e.g. math, code, amount of time spent watching videos, anything that looks like puzzles with correct answers), while many others lag by comparison (creative and strategic work, tasks that combine real-world knowledge, state, context, and common sense).

Software 1.0 easily automates what you can specify.
Software 2.0 easily automates what you can verify.
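The three verifiability criteria (resettable, efficient, rewardable) can also be sketched as a minimal task interface. The task here, always answering 7 against an automated check, and all names are illustrative assumptions, not a real RL library API; it only shows why those three properties let a learner "practice" its way to competence.

```python
import random

class VerifiableTask:
    """A task with the three properties that make it optimizable."""

    def reset(self):
        # Resettable: a fresh attempt can always be started.
        self.answer = 7  # a fixed correct answer, like a unit test

    def reward(self, attempt):
        # Rewardable: an automated process scores any attempt made.
        return 1.0 if attempt == self.answer else 0.0

def practice(task, attempts=2000):
    # Efficient: attempts are cheap, so the learner can make many of
    # them and keep a running score for each candidate action.
    scores = [0.0] * 10
    for _ in range(attempts):
        task.reset()
        action = random.randrange(10)
        scores[action] += task.reward(action)
    # The best-scoring action is the "trained" behavior.
    return max(range(10), key=scores.__getitem__)

best = practice(VerifiableTask())
```

Remove any one property, e.g. make reward require a human judge, and the loop breaks: that is the sense in which non-verifiable tasks resist this kind of automation.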

We have come through a strange cycle in programming, starting with the creation of programming itself as a human activity. Executives with the tiniest smattering of knowledge assume that anyone can write a program, and only now are programmers beginning to win their battle for recognition as true professionals.

All great programmers learn the same way. They poke the box. They code something and see what the computer does. They change it and see what the computer does. They repeat the process again and again until they figure out how the box works.

When I launched my AI career in 1983, I did so by waxing philosophic in my application to the Ph.D. program at Carnegie Mellon. I described AI as “the quantification of the human thinking process, the explication of human behavior,” and our “final step” to understanding ourselves. It was a succinct distillation of the romantic notions in the field at that time and one that inspired me as I pushed the bounds of AI capabilities and human knowledge.
Today, thirty-five years older and hopefully a bit wiser, I see things differently. The AI programs that we’ve created have proven capable of mimicking and surpassing human brains at many tasks. As a researcher and scientist, I’m proud of these accomplishments. But if the original goal was to truly understand myself and other human beings, then these decades of “progress” got me nowhere. In effect, I got my sense of anatomy mixed up. Instead of seeking to outperform the human brain, I should have sought to understand the human heart.
It’s a lesson that it took me far too long to learn. I have spent much of my adult life obsessively working to optimize my impact, to turn my brain into a finely tuned algorithm for maximizing my own influence. I bounced between countries and worked across time zones for that purpose, never realizing that something far more meaningful and far more human lay in the hearts of the family members, friends, and loved ones who surrounded me. It took a cancer diagnosis and the unselfish love of my family for me to finally connect all these dots into a clearer picture of what separates us from the machines we build.
That process changed my life, and in a roundabout way has led me back to my original goal of using AI to reveal our nature as human beings. If AI ever allows us to truly understand ourselves, it will not be because these algorithms captured the mechanical essence of the human mind. It will be because they liberated us to forget about optimizations and to instead focus on what truly makes us

Progress is possible only if we train ourselves to think about programs without thinking of them as pieces of executable code.

Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.
