Sharing an interesting recent conversation on AI's impact on the economy.
AI has been compared to various historical precedents: electricity, the industrial revolution, etc. I think the strongest analogy is AI as a new computing paradigm (Software 2.0), because both are fundamentally about the automation of digital information processing.
If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job to look at would be to what extent its algorithm is fixed, i.e. are you just mechanically transforming information according to rote, easy-to-specify rules (e.g. typing, bookkeeping, human calculators, etc.)? Back then, this was the class of programs that the computing capability of the era allowed us to write (by hand, manually).
With AI now, we are able to write new programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is the thesis of my Software 2.0 blog post from a while ago. In this new programming paradigm, the new most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It's about the extent to which an AI can "practice" something. The environment has to be resettable (you can start a new attempt), efficient (a lot of attempts can be made), and rewardable (there is some automated process to reward any specific attempt that was made).
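The three properties can be sketched as a toy environment interface. A minimal, hypothetical sketch (the class and task are mine, not from any particular RL library):

```python
import random

class GuessParityEnv:
    """Toy verifiable task: say whether an integer is even or odd."""

    def reset(self) -> int:
        # Resettable: a fresh attempt can always be started.
        self.x = random.randint(0, 999)
        return self.x

    def reward(self, attempt: str) -> float:
        # Rewardable: an automated check scores any attempt, no human in the loop.
        correct = "even" if self.x % 2 == 0 else "odd"
        return 1.0 if attempt == correct else 0.0

env = GuessParityEnv()
# Efficient: thousands of attempts per second, so an optimizer can "practice".
total = sum(
    env.reward("even" if env.reset() % 2 == 0 else "odd")
    for _ in range(1000)
)
```

Anything with this shape (reset, cheap attempts, automatic reward) is directly optimizable; anything without it has to be learned by weaker means.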
The more a task/job is verifiable, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out of the neural net magic of generalization, fingers crossed, or via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs. Tasks that are verifiable progress rapidly, possibly even beyond the ability of top experts (e.g. math, code, anything that looks like a puzzle with a correct answer), while many others lag by comparison (creative or strategic tasks, and tasks that combine real-world knowledge, state, context and common sense).
Software 1.0 easily automates what you can specify.
Software 2.0 easily automates what you can verify.
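The two one-liners can be illustrated on the same toy task. A hedged sketch, plain Python with no ML library: Software 1.0 specifies the algorithm by hand; Software 2.0 specifies only a verifiable objective (squared error on examples) and lets gradient descent search for the parameters.

```python
# Software 1.0: the algorithm (fahrenheit -> celsius) is specified by hand.
def f_to_c_v1(f: float) -> float:
    return (f - 32) * 5 / 9

# Software 2.0: only the objective is specified; gradient descent searches
# the (tiny) program space c = a*f + b for parameters that verify well.
data = [(32.0, 0.0), (212.0, 100.0), (98.6, 37.0)]
a, b = 0.0, 0.0
lr = 1e-5
for _ in range(400_000):
    for f, c in data:
        err = (a * f + b) - c      # verifiable: the error is computable automatically
        a -= lr * 2 * err * f      # gradient of squared error w.r.t. a
        b -= lr * 2 * err          # gradient of squared error w.r.t. b
```

Same function recovered two ways: one by specification, one by verification. Real neural nets just do the second at vastly larger scale.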
The future expands the variance of the human condition a lot more than it drags its mean. This is an empirical observation with interesting extrapolations.
The past is well-approximated as a population of farmers, living similar lives w.r.t. upbringing, knowledge, activities, ideals, aspirations, etc.
The future trends to include all of:
- the transhumanists who "ascend" with neuralinks etc., and the Amish living ~19th century life.
- those who "worship" ideals of religion, technology, knowledge, wealth, fitness, community, nature, art, ...
- those exploring externally into the stars, those exploring internally into minds (drugs++), or those who disappear into digital VR worlds
- those who date a different partner every day and those who are monogamous for life
- those who travel broadly and those who stay in one location their entire life
- those in megacities and those off-the-grid
For almost any question about a dimension of human condition, the answer trends not to any specific thing but to "all of the above". And to an extreme diversity of memetics. At least, this feels like the outcome in free societies that trend to abundance. I don't know what it feels like to live in such a society but it's interesting to think about.
Products with extensive/rich UIs (lots of sliders, switches, menus), no scripting support, and opaque, custom, binary formats are ngmi in the era of heavy human+AI collaboration.
If an LLM can't read the underlying representations and manipulate them (and all of the related settings) via scripting, then it can't co-pilot your product alongside existing professionals, and it can't enable vibe coding for the 100X more numerous aspiring prosumers.
Example high risk (binary objects/artifacts, no text DSL): every Adobe product, DAWs, CAD/3D
Example medium-high risk (already partially text scriptable): Blender, Unity
Example medium-low risk (mostly but not entirely text already, some automation/plugins ecosystem): Excel
Example low risk (already just all text, lucky!): IDEs like VS Code, Jupyter, Obsidian, ...
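To make the risk spectrum concrete, here is a minimal sketch of what "low risk" buys you. The JSON project format below is hypothetical, but any text-first representation lets a script (and therefore an LLM) perform the same edit a user would otherwise make with a slider:

```python
import json

# A hypothetical audio project stored as plain text rather than a binary blob.
project = json.loads("""
{
  "tracks": [
    {"name": "vocals", "gain_db": -3.0, "muted": false},
    {"name": "drums",  "gain_db": 0.0,  "muted": false}
  ]
}
""")

# The "scripting support" in question: a programmatic edit instead of a slider drag.
for track in project["tracks"]:
    if track["name"] == "drums":
        track["gain_db"] -= 6.0   # turn the drums down 6 dB

edited = json.dumps(project, indent=2)
```

A binary, undocumented project file admits no equivalent three-line edit, which is exactly what pushes a product toward the high-risk end of the list above.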
AIs will get better and better at human UI/UX (Operator and friends), but I suspect the products that exclusively wait for that future, without trying to meet the technology halfway where it is today, are not going to have a good time.