Andrej Karpathy:
The future expands the variance of human condition a lot more than it drags its mean. This is an empirical observation with interesting extrapolations.

The past is well-approximated as a population of farmers, living similar lives w.r.t. upbringing, knowledge, activities, ideals, aspirations, etc.

The future trends to include all of:
- the transhumanists who "ascend" with neuralinks etc., and the Amish living ~19th century life.
- those who "worship" ideals of religion, technology, knowledge, wealth, fitness, community, nature, art, ...
- those exploring externally into the stars, those exploring internally into minds (drugs++), or those who disappear into digital VR worlds
- those who date a different partner every day and those who are monogamous for life
- those who travel broadly and those who stay in one location their entire life
- those in megacities and those off-the-grid

For almost any question about a dimension of human condition, the answer trends not to any specific thing but to "all of the above". And to an extreme diversity of memetics. At least, this feels like the outcome in free societies that trend to abundance. I don't know what it feels like to live in such a society but it's interesting to think about.


Additional quotes by Andrej Karpathy

Don't think of LLMs as entities but as simulators. For example, when exploring a topic, don't ask:

"What do you think about xyz?"

There is no "you". Next time try:

"What would be a good group of people to explore xyz? What would they say?"

The LLM can channel/simulate many perspectives, but it hasn't "thought about" xyz over time and formed its own opinions in the way we're used to. If you force it via the use of "you", it will give you something by adopting a personality embedding vector implied by the statistics of its finetuning data and then simulating that. It's fine to do, but there is a lot less mystique to it than I find people naively attribute to "asking an AI".
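The reframing above can be captured as a simple prompt-building helper. This is a minimal sketch, not Karpathy's code; the function names and wording are illustrative, and the strings would be sent to whatever chat model you use:

```python
def simulator_prompt(topic: str) -> str:
    # Treat the model as a simulator: ask it to assemble and voice
    # a panel of distinct perspectives on the topic.
    return (
        f"What would be a good group of people to explore {topic}? "
        "What would each of them say?"
    )


def entity_prompt(topic: str) -> str:
    # The framing to avoid: it presumes a single persistent "you"
    # with settled opinions, which the model does not have.
    return f"What do you think about {topic}?"


# Example: simulator_prompt("open-ended AI benchmarks") asks for a
# panel of viewpoints rather than one synthetic "opinion".
```

The point of the helper is only to make the contrast concrete: the first framing elicits several simulated viewpoints, while the second collapses the model into one statistically implied persona.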
