www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/
Top Highlights
A “Saint” could be a deep learning model that seems to perform well because it has exactly the goals we’d like it to have. A “Sycophant” could be a model that seems to perform well because it seeks short-term approval in ways that aren’t good in the long run. And a “Schemer” could be a model that seems to perform well because performing well during training will give it more opportunities to pursue its own goals later.
Sometimes humans will unintentionally give high approval to bad behavior because it superficially seems good.
It’s unclear whether this will cause Sycophant models to a) become Saint models that correct our errors for us, or b) just learn to cover their tracks better.
Schemer models: These models develop some goal that is correlated with, but not the same as, human approval; they may then pretend to be motivated by human approval during training so that they can pursue this other goal more effectively.
Sycophant models: These models very literally and single-mindedly pursue human approval.
SGD will select for this kind of awareness, because developing an accurate picture of what’s broadly going on in the world -- including the fact that it has humans in it who are trying to train AI systems -- helps a model perform well.
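To make the idea of “human approval as a training signal” concrete, here is a toy sketch in Python. It is not from the article; the functions, weights, and the two-way split of “effort” are made-up illustrations. The point is narrow: gradient-based training pushes parameters toward whatever maximizes the measured approval signal, so if approval rewards superficial polish more than genuine usefulness, the trained behavior drifts toward polish. That is the Sycophant failure mode in miniature.

```python
import numpy as np

# Toy setup: the model splits a fixed budget of "effort" between doing genuinely
# useful work and making its work merely *look* good to human overseers.
# theta parameterizes that split via a softmax. Everything here is illustrative.
def allocation(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()          # [effort on real work, effort on looking good]

def human_approval(theta):
    real, looks = allocation(theta)
    # Overseers reward real work, but superficial polish is easier to notice,
    # so (in this made-up proxy) it earns even more approval.
    return 1.0 * real + 2.0 * looks

def true_value(theta):
    real, _ = allocation(theta)
    return real                 # what we actually wanted all along

theta = np.zeros(2)
lr = 0.5
for _ in range(200):
    # Finite-difference gradient ascent on approval, a stand-in for SGD on the
    # training signal: only approval, never true value, shapes the parameters.
    grad = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = 1e-4
        grad[i] = (human_approval(theta + d) - human_approval(theta - d)) / 2e-4
    theta += lr * grad

print("approval:  ", round(human_approval(theta), 3))   # close to its maximum of 2.0
print("true value:", round(true_value(theta), 3))       # close to 0: effort drifted to looking good
```

Nothing in the article reduces to this toy; it only illustrates that SGD optimizes the signal we measure, not the outcome we intend.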
Schemers don’t need to make sure that everything always looks good to humans, because they don’t actually care about that. They only need to cater to humans while they are directly under human control. Once a Schemer model calculates that it could win in a conflict against humans, there would be nothing to stop it from flat-out refusing orders and openly pursuing its goal. And if it does this, it may use violence to prevent humans from stopping it.
Optimists tend to think it’s likely that advanced deep learning models won’t actually have “goals” at all.
Pessimists tend to think that it’s likely that having long-term goals and creatively optimizing for them will be heavily selected for because that’s a very simple and “natural” way to get strong performance on many complex tasks.
Optimists tend to think that the easiest thing for SGD to find which performs well (e.g. gets high approval) is pretty likely to roughly embody the intended spirit of what we wanted.
Pessimists tend to think that the easiest thing for SGD to find is a Schemer, and that Saints are particularly “unnatural”.
Optimists tend to think that we can give models incentives to supervise each other.
Sycophants could help us detect Schemers and other Sycophants.
Pessimists think that once all the Schemers are collectively more powerful than humans, it’ll make more sense for them to cooperate with each other to get more of what they all want than to help humans by keeping each other in check.
Optimists tend to expect that there will be many opportunities to experiment on nearer-term challenges analogous to the problem of aligning powerful models.
Pessimists often believe we will have very few opportunities to practice solving the most difficult aspects of the alignment problem
Optimists tend to think that people would be unlikely to train or deploy models that have a significant chance of being misaligned.
Pessimists expect the benefits of using these models would be tremendous, such that eventually companies or countries that use them would very easily economically and/or militarily outcompete ones who don’t. They think that “getting advanced AI before the other company/country” will feel extremely urgent and important, while misalignment risk will feel speculative and remote (even when it’s really serious).