Oct 11, 2024

Exploring Human AI Interaction: Collaboration in the Digital Age

As AI systems become more human-like, they will become collaborators rather than tools, and bringing them into service will be more like onboarding a new employee.

Research on the design of intelligent user interfaces is a well-established field. Researchers and practitioners meet annually at the ACM Conference on Intelligent User Interfaces (ACM IUI) to discuss the latest advances at the intersection of Human-Computer Interaction (HCI) and Artificial Intelligence (AI), and human-AI interaction has become a major focus for both companies and researchers.

The ACM IUI conference grew out of a 1988 workshop titled "Architectures for Intelligent Interfaces". The workshop organizers published the 500-page book Intelligent User Interfaces in 1991, and ACM’s First International Workshop on Intelligent User Interfaces met in 1993. The conference will hold its 30th annual meeting in Italy in 2025, and the call for papers has an interesting list of topics.

AI agents are AI systems that can perceive their environment and autonomously act in response to, and on, objects in that environment. The conference call for papers uses the term “agent” only once in its long list of topics, under “Intelligent Applications,” which suggests that AI systems are still primarily framed as tools to be controlled by users.


From Passive Tools to Human-like Agents

NVIDIA CEO Jensen Huang described the current paradigm shift as being driven by the introduction of autonomous AI systems as “agents”:

“Computers, software– they're an industry of tools. For the very first time, this is going to be an industry of skills, and you capture that phrase, and you call it agents. But it's going to be, for the very first time, an agent sitting on top of tools [as] agents using tools. The opportunity for agents is gigantic. …

It sounds insane, but here's the amazing thing. We're going to have agents that obviously understand the subtleties of the things that we ask it to do, but it can also use tools and it can reason, and it can reason with each other and collaborate with each other.” 

In the past, a mathematical algorithm was identified as an AI system if it learned to categorize objects by observing randomly generated category members. Now, people think of AI systems as intelligent assistants, and they are evolving into proactive agents that predict user requests and act as collaborators. 

Yu Huang defined and explained five levels of AI agents, summarized below. The first three levels differ in the baseline AI, and then each higher level adds two or more human attributes.

The first pair of attributes, perception and action, corresponds to interpreted sensory input and motor output. According to Huang, the more advanced models gain reasoning and decision-making, then memory and reflection, and then generalization and autonomous learning.

The highest level includes personality and collaborative behavior among multiple agents. Personality includes character patterns with cognitive and emotional attributes. 

It is interesting to identify the parallels between Huang’s sequence and my summary of Dreyfus and Dreyfus’ sequence for human learning shown below. 

STAGE NAME: FUNDAMENTAL FOCUS OF COGNITIVE PROCESSING

  1. Novice: Objects (each object has attributes, such as shape and permitted interactions with it). This uses perception and action to act on Gibson affordances, which relate an agent’s capabilities to the features available for action in the environment.  

  2. Advanced Beginner: Situation (similarity of current situation to one or more previous situations). This introduces the use of reasoning and decision-making because they are necessary to understand and act in situations.

  3. Competence: Patterns (abstracting and recognizing similarities across situations). This introduces memories of situations and the capability of reflection to abstract similarities among them.

  4. Proficiency: Goal (reevaluation of similarities based on a particular criterion for success). This introduces generalization and autonomous learning.

  5. Expert: Roles (the playing style of an opponent is included as a factor when reevaluating the weightings). This introduces a knowledge of the “other’s” character patterns with cognitive and emotional attributes. 

  6. Mastery: Purpose (allows conscious control to override and/or modify any learned response). This level is solely a human capability that allows humans to ‘stand outside themselves’ because of the difference between self-awareness and consciousness.
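To make the parallel concrete, here is a minimal Python sketch that lines up the agent attributes from Huang’s levels with the Dreyfus and Dreyfus stages. The pairing is my own illustrative reading of the two sequences above, not an official mapping from either source.

```python
# Illustrative only: aligns the agent attributes described above with the
# Dreyfus & Dreyfus stages. The pairing follows the parallels drawn in this
# post, not an official mapping from either source.

AGENT_ATTRIBUTES_BY_STAGE = {
    "Novice": ["perception", "action"],
    "Advanced Beginner": ["reasoning", "decision-making"],
    "Competence": ["memory", "reflection"],
    "Proficiency": ["generalization", "autonomous learning"],
    "Expert": ["personality", "multi-agent collaboration"],
    "Mastery": [],  # no agent counterpart: conscious override is human-only
}

def describe_parallels() -> None:
    """Print each human learning stage next to the agent attributes it parallels."""
    for stage, attributes in AGENT_ATTRIBUTES_BY_STAGE.items():
        counterpart = ", ".join(attributes) if attributes else "(no AI-agent equivalent)"
        print(f"{stage:18} -> {counterpart}")

if __name__ == "__main__":
    describe_parallels()
```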

As AI systems shift from useful tools to collaborative agents, the way we interact with them will change dramatically. Agentic AI is evolving rapidly, and AI agents, such as future releases of Pieces, will be able to overhaul typical software development workflows to make work easier and developers even more productive.

According to Jensen Huang, “Working with AI will soon be just like onboarding new employees.” He has explained that the computer industry will be reinvented in all aspects.

“You can't solve this new way of doing computing by just designing a chip. Every aspect of the computer has fundamentally changed, and so everything from the networking to the switching to the way that computers are designed to the chips themselves, all of the software that sits on top of it and the methodology that pulls it all together, it's a big deal because it's a complete reinvention of the computer industry.”


Human-AI Collaboration Is Coming Soon

Within a short time of beginning to interact with a natural-language processing (NLP) computer program, people attribute human-like feelings to it. This was first observed in 1966, when MIT researcher Joseph Weizenbaum released an NLP system called ELIZA. There was no database storage or reasoning, only an “active listening” process that echoed the user's input back as a question.

Gradual progress was made over the years, with IBM building early (small) statistical language models in the 1980s. ChatGPT, the chat interface that brought large language models (LLMs) to a mass audience, was released near the end of 2022. The time between releases of more advanced, larger models has now shrunk to weeks, sometimes even days. There has been a definite shift in AI-human interaction.

For example, Pieces’ unique workflow design allows users to interact with the more advanced AI models in ways no other tool provides. A recent blog post provides 20 novel AI prompts (with demos) covering many different types of questions. The examples can be used as templates for four categories of prompts:

  1. Researching and Problem Solving in the Browser

  2. Coding in the IDE

  3. Collaborating with Colleagues

  4. Context Switching Across Various Tools and Improving Productivity

The fourth category focuses on a new capability that is intensely valuable now and will become even more valuable in the future. Pieces maintains a “Live Context” workstream that captures and supports the user’s work as it is done. Example questions include:

  • What from yesterday’s research can help me here?

  • What are the steps to implement this tool based on their docs?

  • When’s the best time to set up a sync with my team based on our shared calendars?

  • What are the main roadblocks for this release based on open GitHub issues?

The AI context memory that supports these types of questions is the local (on-device) central repository that syncs across the three main pillars of a developer's workflow—(1) researching and problem-solving in the browser, (2) coding in the IDE, and (3) collaborating with colleagues on communication platforms. It also supports broader use cases for reducing context switching and improving overall productivity.
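Pieces does not publish the internals of this repository, so the snippet below is only a conceptual sketch, with all class and method names hypothetical, of what an on-device context store organized around those three pillars might look like.

```python
# Conceptual sketch only. The class and method names are hypothetical and do
# not reflect Pieces' actual implementation; the point is the shape of an
# on-device context memory tagged by workflow "pillar".
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

PILLARS = ("browser_research", "ide_coding", "team_collaboration")

@dataclass
class ContextEntry:
    pillar: str                     # which workflow pillar produced this entry
    content: str                    # captured text, snippet, or link
    captured_at: datetime = field(default_factory=datetime.now)

class LocalContextStore:
    """Toy on-device store: everything stays on the local machine."""

    def __init__(self) -> None:
        self._entries: List[ContextEntry] = []

    def capture(self, pillar: str, content: str) -> None:
        if pillar not in PILLARS:
            raise ValueError(f"unknown pillar: {pillar}")
        self._entries.append(ContextEntry(pillar, content))

    def recall(self, keyword: str) -> List[ContextEntry]:
        """Naive keyword recall across all pillars; a real system would use embeddings."""
        return [e for e in self._entries if keyword.lower() in e.content.lower()]

# Capture work as it happens, then ask across pillars.
store = LocalContextStore()
store.capture("browser_research", "Docs on OAuth token refresh flows")
store.capture("ide_coding", "def refresh_token(client): ...")
for entry in store.recall("token"):
    print(entry.pillar, "->", entry.content)
```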

Pieces’ Workstream Pattern Engine autonomously stores information from the user’s workflow. There is no danger of it keeping private information or being open to security vulnerabilities. It is there so that the user can “remember anything, interact with everything.”

Soon, Pieces will release a proactive agent built on a “feed” technology that gives developers helpful resources based on the things they interact with. The engine knows what you are working on, what you have worked on in the past, and what you might work on in the near future.

For example, when a user clicks on a new repository, Pieces will be able to instantly supply useful code snippets and links based on the code in that repository. Coupled with an autonomous AI agent, that collaboration could significantly increase productivity. In the future, Pieces will also solve problems straight from your command line.

On the day I originally submitted this post, OpenAI released a beta ChatGPT feature called Canvas, built on GPT-4o. It splits the screen into two parts: the chat conversation and a separate canvas for the content the user and ChatGPT are working on together. For example, prompting for code will display the code in a canvas area that is separate from the chat conversation.

The focus of the release announcement is “training the model to become a collaborator.” It states:

“We trained GPT-4o to collaborate as a creative partner. The model knows when to open a canvas, make targeted edits, and fully rewrite. It also understands broader context to provide precise feedback and suggestions.” 

This is a newly trained version of GPT-4o rather than just the old model inside a wrapper.


Conclusion

Pieces’ Live Context is active now. In the future, Pieces will take natural input from your normal workflow activities and act on your behalf, solving tasks for you based on what it learns from your current work.

If you want to build on-device AI agents now, without depending on cloud-based providers, explore the Pieces Python CLI open-source project. Pieces aims to be the central AI operating system for a world of AI agents.
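As a concrete starting point, here is a minimal sketch of calling the CLI from a Python script. It assumes a `pieces` executable is installed and on the PATH and that it exposes an `ask` subcommand that takes a question as an argument; check the project's README for the actual command names and flags, which may differ.

```python
# Minimal sketch: shell out to the Pieces CLI from a Python script.
# Assumption: the installed `pieces` executable accepts `ask <question>`;
# consult the open-source project's README for the real commands and flags.
import subprocess

def ask_pieces(question: str) -> str:
    """Send a question to the local Pieces CLI and return its text response."""
    result = subprocess.run(
        ["pieces", "ask", question],
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError if the CLI exits with an error
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask_pieces("What are the main roadblocks for this release based on open GitHub issues?"))
```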

If human-AI interaction interests you, here are some suggested sources for the latest news and updates.

User-interaction researchers have created and studied guidelines for human-AI interaction. The researchers who wrote “A Comparative Analysis of Industry Human-AI Interaction Guidelines” examined the sets of guidelines issued by Google, Microsoft, Apple, and others, and they have created an open-source website for AI interaction guidelines.

Human-robot interaction AI applications currently exist, and robots can act as autonomous agents. There is a web portal for that information, and ACM Transactions on Human-Robot Interaction is a peer-reviewed interdisciplinary journal on human-robot interaction.

Written by Martha J. Lindeman, Ph.D.