www.nytimes.com/2023/10/18/technology/how-ai-works-stanford.html
Top Highlights
Transparency is particularly important now, as models grow more powerful and millions of people incorporate A.I. tools into their daily lives. Knowing more about how these systems work would give regulators, researchers and users a better understanding of what they’re dealing with, and allow them to ask better questions of the companies behind the models.
These firms generally don’t release information about what data was used to train their models, or what hardware they use to run them.
There are no user manuals for A.I. systems, and no list of everything these systems are capable of doing, or what kinds of safety testing have gone into them.
And while some A.I. models have been made open-source — meaning their code is given away for free — the public still doesn’t know much about the process of creating them, or what happens after they’re released.
I generally hear one of three common responses from A.I. executives when I ask them why they don’t share more information about their models publicly.
The first is lawsuits.
The second common response is competition.
The third response I often hear is safety.