lastroai.substack.com/p/beyond-software-the-rise-of-large
Top Highlights
While the previous sections explored the impact of LLMs in specific domains, these models can have an even greater impact when we look at computation itself. Computing evolved through successive abstractions: transistors abstracted away the physical complexity of vacuum tubes and electrons, operating systems abstracted the entire hardware stack into software, and high-level programming languages made writing software far easier.
An abstraction layer is a fundamental concept in computer science: it lets people build increasingly complex technology while keeping cognitive load manageable.
The rise of Large Language Models (“LLMs”, such as OpenAI's GPT-4, Google’s PaLM 2, and others) has the potential to be the first technological revolution to create a “multipurpose” abstraction layer: a single technology creating new abstraction layers not in one but in several of the domains above, reimagining how we interact with computers, write code, connect computers to each other, and even how computers, well, compute.
People initially interacted with early computers through punched cards that exactly matched the binary code a machine would process. Then came command-line interfaces (think of MS-DOS or the Mac’s Terminal) and the graphical user interfaces that replaced them.
The release of the iPhone in 2007, with its touchscreen interface, was arguably the latest major breakthrough in user interfaces.
The release of ChatGPT’s conversational interface in November 2022 marked a new breakthrough in the history of user interfaces, one with the potential to redefine the human-machine interaction paradigm. For the first time, computers can actually “decode” the natural language a human uses and process that input directly.
For now, processing that input means answering questions, but it will soon mean performing real-world actions such as booking hotel reservations, taking part in brainstorming sessions, or simply ordering groceries in a more convenient way, marking a remarkable upgrade in how easily people can interact with computers.
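To make this concrete, here is a minimal sketch of how a conversational interface might map a natural-language request to a structured action. The llm_complete placeholder and the JSON schema are illustrative assumptions, not any particular provider’s API:

```python
import json

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM provider's completion API."""
    raise NotImplementedError("wire this to an actual provider")

def parse_request(user_message: str) -> dict:
    """Ask the model to turn free-form text into a structured action."""
    prompt = (
        "Extract the user's intended action as JSON with keys "
        "'action' and 'parameters'.\n"
        f"User: {user_message}\nJSON:"
    )
    return json.loads(llm_complete(prompt))

# parse_request("Order two cartons of milk for tomorrow morning") might yield
# {"action": "order_groceries",
#  "parameters": {"items": ["milk"], "quantity": 2, "delivery": "tomorrow morning"}}
```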
Traditionally, software development involved a human breaking down a problem, devising a solution, and translating it into code. Developers can now describe to an LLM what the code should accomplish, providing a high-level description of what should be built, and the model will generate the actual code itself.
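As a hedged sketch of that workflow (llm_complete again stands in for any completion API; the prompt wording is an assumption):

```python
def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM provider's completion API."""
    raise NotImplementedError("wire this to an actual provider")

def generate_code(description: str) -> str:
    """Turn a high-level description into candidate Python source code."""
    prompt = (
        "Write a Python function that satisfies this description. "
        "Return only the code.\n"
        f"Description: {description}"
    )
    return llm_complete(prompt)

# print(generate_code("parse an ISO-8601 date string and return the weekday"))
```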
The impact of these advancements on software development productivity has been remarkable. For instance, this study by GitHub observed a 57% reduction in the time needed to complete tasks, allowing developers and teams to deliver projects more efficiently.
As discussed in the User Interface section, LLMs allow computers to communicate with humans using natural language. This capability, however, isn’t limited to a human interface: these models can also enable two computers (or two different applications) to communicate with each other in natural language.
This capability can connect real-world applications as well. By leveraging natural language instead of the highly structured protocols of APIs, LLMs can pave the way for applications to communicate with each other the same way humans interact with them, effectively creating a new abstraction layer for the network stack.
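A minimal sketch of what such an exchange might look like, with two hypothetical services and no shared schema (the service names and prompts are assumptions for illustration):

```python
def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM provider's completion API."""
    raise NotImplementedError("wire this to an actual provider")

def booking_service(message: str) -> str:
    """A service that accepts requests written in plain English."""
    prompt = (
        "You are a hotel-booking service. Read the request below, decide "
        "whether it can be fulfilled, and reply in plain English.\n"
        f"Request: {message}"
    )
    return llm_complete(prompt)

def travel_app() -> str:
    # No shared schema, no versioned endpoint: the "protocol" is a sentence.
    return booking_service(
        "Please book a double room in Lisbon for March 3-5, "
        "budget around 150 EUR per night."
    )
```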
At Lastro, we’re creating a real estate broker application that allows people to browse for properties, get personalized information, schedule visits, and so forth. The application uses different “agents” that communicate through natural language with the customer, with each other, and even with third parties, streamlining the entire process.
Despite all these advancements, our computing paradigm remains a deterministic one. Writing an application in Python still performs the same operations as instructing a computer in assembly language; theoretically, it could even be rewritten by combining an enormous number of transistors in the right way.
That is why computers are awesome at processing data and at math tasks, but terrible at performing “probabilistic” tasks easily done by humans, such as communicating thoughts or deciding what actions to take under uncertainty. Planning a strategy, negotiating with a counterpart, or starting a company, for instance, are all inherently probabilistic endeavors.
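A toy illustration of the distinction (random.choice stands in here for sampling from an LLM at nonzero temperature):

```python
import random

def deterministic_sum(xs: list[int]) -> int:
    # Classic computing: the same input always produces the same output.
    return sum(xs)

def probabilistic_answer(options: list[str]) -> str:
    # Sampling: the same input can yield a different output on each call,
    # the way an LLM with nonzero temperature does.
    return random.choice(options)
```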
LLMs can deal with uncertainty by generating a range of potential outcomes, and they represent, for the first time, a major shift from deterministic to probabilistic computing. They have been shown, for example, to perform “Chain of Thought Reasoning”: breaking down a goal into the sub-tasks necessary for its completion and planning how to perform them. It is as if, after being trained on massive datasets of human knowledge, the models become capable of emulating human logic.
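A minimal sketch of chain-of-thought prompting, assuming the same llm_complete placeholder; the exact prompt wording is illustrative, not a fixed recipe:

```python
def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM provider's completion API."""
    raise NotImplementedError("wire this to an actual provider")

def plan(goal: str) -> str:
    """Elicit step-by-step reasoning before an answer."""
    prompt = (
        f"Goal: {goal}\n"
        "Let's think step by step: first list the sub-tasks needed to "
        "reach the goal, then outline how to perform each one."
    )
    return llm_complete(prompt)

# plan("launch a small e-commerce site for handmade furniture")
```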