Intelligent writing aids have been extensively investigated for a wide range of writing objectives and activities. Recent advances in writing assistants have centered on Large Language Models (LLMs), which allow people to generate content by describing their intent in a prompt. Notable developments in LLMs such as ChatGPT, and their use in mainstream products, highlight their potential as writing assistants. Nevertheless, the human-computer interface of these assistants raises significant usability issues, including the coherence and fluency of model output, trustworthiness, ownership of the generated material, and predictability of model performance.
While some interactional components of writing assistants have been studied in earlier publications, there has yet to be a focused attempt to address end-to-end writing goals and approach these interactions from a usability perspective. The issues above often leave users struggling to use the tools effectively to accomplish their writing goals, and sometimes lead them to give up entirely. Researchers from McGill University and Université de Montréal examine the interface design of LLM-supported intelligent writing assistants, emphasizing human activities and drawing on prior research and design literature. They also propose Norman's seven stages of action as a design framework for LLM-supported intelligent writing assistants and analyze its usability implications.
Norman's seven stages of action, a cyclical cognitive model, is often used to understand users' thought processes and the physical activities associated with them; it is primarily used to inform system interface design. The seven stages are (a) goal formation, (b) plan, (c) specify, (d) perform, (e) perceive, (f) interpret, and (g) compare, as shown in Figure 1. The plan, specify, and perform stages make up the execution phase of an interaction, and the perceive, interpret, and compare stages make up the evaluation phase. The user's interactions are based on a mental model of the system built from prior assumptions. The researchers assert that this framework enables the creation and assessment of interfaces that support fine-grained interactions with LLMs at each of these stages.
They suggest that effective LLM-based writing assistance must answer the questions relevant to each stage in order to inform the design and give the user the necessary capabilities. To clarify their point, they provide an example drawn from their initial effort to use OpenAI's Codex to write software tutorials. In a typical interaction, the user begins by deciding on a primary objective, such as creating a tutorial on how to use matplotlib to plot data points. They then break the goal down into manageable components to help them decide how to approach the writing assistant.
The main objective, for instance, may be broken down into four sub-goals:
- Authoring tutorial sections
- Providing suitable instructions for library installation in various contexts
- Producing and explaining code snippets
- Increasing the tutorial’s readability
Each step in this example can also be considered a sub-goal, though with a narrower scope, and may come after several cycles through the action framework. When users ask the writing assistant for help, they typically describe and then submit their request through the interface, for instance, "Write a code snippet to plot a scatter plot using matplotlib given the data points in a Python list and provide an explanation of the code."
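For illustration, a response to that prompt might look something like the following minimal sketch. The data points and output file name are hypothetical, and a non-interactive backend is used so the snippet runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
import matplotlib.pyplot as plt

# Hypothetical data points: a Python list of (x, y) pairs, as the prompt assumes
data_points = [(1, 2), (2, 4), (3, 1), (4, 5), (5, 3)]

# Unpack the pairs into separate x and y coordinate sequences
xs, ys = zip(*data_points)

fig, ax = plt.subplots()
ax.scatter(xs, ys)  # draw each (x, y) pair as a point
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Scatter plot of data points")
fig.savefig("scatter.png")  # save to a file instead of calling plt.show()
```

An assistant would typically pair such a snippet with a line-by-line explanation, which the user then evaluates against their own expectations.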
The perform stage can include various interface capabilities for changing and updating prompts, while the specify stage may include mechanisms that suggest alternative prompts to the model. The execution phase is influenced by the users' prior conceptual models, their task and domain expertise, or both. When the writing assistant produces an output, the user reads and interprets it and adjusts their preexisting mental models according to their knowledge and skill. For instance, a user with substantial experience with matplotlib would be better able to detect any unexpected material or mistakes in the resulting code. Moreover, it may be necessary to run any existing unit tests or execute the produced code snippet in an IDE to compare the results against resources in other contexts.
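This evaluation step can be as lightweight as a quick sanity check run in the user's own environment. A sketch, assuming the generated snippet exposes a hypothetical helper that unpacks the list of data points:

```python
# Hypothetical helper a generated snippet might define: it unpacks a
# list of (x, y) pairs into separate x and y coordinate lists.
def unpack_points(data_points):
    xs, ys = zip(*data_points)
    return list(xs), list(ys)

# A small check a user might run before trusting the plotting code
xs, ys = unpack_points([(1, 2), (2, 4), (3, 1)])
assert xs == [1, 2, 3]
assert ys == [2, 4, 1]
print("unpack_points behaves as expected")
```

Checks like this close the loop between the perceive/interpret stages and the compare stage: the user verifies the output against their own expectations rather than taking it on faith.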
They contend that applying Norman's seven stages of action as a framework for analyzing user behavior with LLM-based writing aids offers a useful foundation for understanding and designing fine-grained interactions across the phases of goal formation, execution, and evaluation. By posing questions pertinent to each stage, it is possible to pinpoint the essential interactions and direct the design of a writing assistant for a task such as creating tutorials. Analyzing existing tools and their features along the interaction design dimensions outlined by the framework makes it possible to address specific usability issues in the design of LLM-based writing tools. More ambitiously, they point to understudied research areas in human-LLM interaction, such as aligning with user preferences, designing effective prompts, and the explainability and interpretability of model outputs.
Check out the Paper. Don't forget to join our 20k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com
🚀 Check Out 100s of AI Tools in AI Tools Club
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.