Researchers, we're due for a heart-to-heart about AI. We're inspired to see so many researchers asking great questions and discussing the risks associated with AI. Even though we've had AI features in our app for a while now, we've released many new AI features over the last couple of months, and we recognize that the conversation has shifted to a deeper level, one that requires a deeper perspective.

The risks of AI are a topic we've been following and investing in: to understand the nuances, to better identify and communicate the risks in our work, and to begin mitigating or eliminating risks when possible. To support those efforts, we developed nine principles to guide our thinking, product development, and research related to AI.

We call them principles because, even in the rapidly changing landscape of AI, practices might change but principles should remain consistent. By sharing these mental models, we hope to create accountability and honor the important research happening on our platform. We also hope that these principles might inspire other technology companies to adopt similar perspectives or share their own.

Here are Notably's nine principles for researching, designing, and developing products with AI:

Principle #1

Acknowledge AI Risk

Too much time is already spent trying to establish a shared consensus about the risks of AI. While a lot of attention is paid to future AI doomsday scenarios, a growing group of researchers and scholars are sounding the alarm about the present-day risks of AI, such as racism, sexism, and discrimination. At this point, their claims are clear and the evidence is apparent. All that remains is for the industry to believe them.

It was inspiring to discover a framework to help us categorize and learn about these types of risks, and to explore how we could begin to mitigate them in Notably. Denying that these risks exist delays progress toward mitigating them. It also delays the ethical innovation needed to reduce potential doomsday scenarios. We're confident there are many paths available for navigating AI risks, including documentation, audits, and transparency. These are the steps we're taking at Notably, and the steps we implore other companies building with this technology to explore as well.

Principle #2

AI is Optional

The job of research is not easy. If it were easy, then it wouldn't require research. Depending on the research and the researcher, there are aspects of research where AI can help, and there are aspects where AI could do more harm than good.

Although we strive to avoid the latter, we acknowledge that there will always be research scenarios where AI is not the right choice. That's why this principle is an important one. It means that anything you can accomplish with AI in Notably, you can also achieve without AI in Notably.

Initially, this wasn't much of a challenge for us, since we already had many human-led features for conducting research, and we followed on with complementary AI-powered versions. However, as we continue to develop new features, this principle ensures that Notably remains flexible and able to conform to the methods, practices, and best judgment of researchers.

Principle #3

Privacy is more important than training algorithms

Data is often called the new oil, and in a sense, privacy is a non-renewable resource as well. Many social values derive from the right to privacy, and once privacy is gone, it's gone for good. The sensitivity of data has become a major concern, especially when it comes to research. Organizations often work with private data that needs to be kept confidential, and the insights from research and development have become an organization's most valuable asset. To address this concern, this principle works to ensure the privacy and security of our customers' data. This can already be seen in our policy of not using AI models that store data after it's sent for processing, since doing so poses a potential threat to customers' privacy.

We also default to opting out of using data created on Notably to train or improve models outside of a private workspace. This means that we do not use customers' data in any way for training that crosses the boundaries of their own account. For instance, with our suggested tags feature, we could shorten the time it typically takes to improve the accuracy of suggested tags for our customers by sharing data across workspaces, but we choose not to. We believe that our customers' privacy and security should be our top priority. Even if that means more work or inconvenience in the short term, in the long term we know it's a principle worth protecting and advocating for.
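As a rough illustration of what this default looks like in practice, the sketch below gates training data on a per-workspace opt-out flag. It is a minimal example, not Notably's actual implementation; the setting names and data shapes are assumptions.

```python
from dataclasses import dataclass

@dataclass
class WorkspaceSettings:
    """Hypothetical per-workspace privacy settings (names are illustrative)."""
    workspace_id: str
    share_data_for_training: bool = False  # opted out by default

def select_training_records(records, settings):
    """Return records eligible for model improvement.

    With the default opt-out, only data created inside this workspace is
    ever considered, so nothing crosses the account boundary.
    """
    own = [r for r in records if r["workspace_id"] == settings.workspace_id]
    if settings.share_data_for_training:
        return records  # would pool data across workspaces; off by default
    return own

if __name__ == "__main__":
    settings = WorkspaceSettings(workspace_id="ws-123")
    records = [
        {"workspace_id": "ws-123", "text": "note from this workspace"},
        {"workspace_id": "ws-456", "text": "note from another workspace"},
    ]
    print(select_training_records(records, settings))  # only the ws-123 record
```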

At Notably, we are committed to providing our customers with a private and secure workspace while ensuring that their data is protected. We believe that privacy is a fundamental right and it shouldn't be compromised.

Principle #4

Data Labeling with Extreme Caution

Data labeling is a task that AI was created to do, but it also carries significant risks. To us, data labeling involves classifying a piece of research, such as a small snippet of text, with a tag, theme, or other research code.

The risk of labeling data with AI is that those labels are often used to create or identify patterns, and those patterns go on to become the thesis of insights and the catalysts of change. However, if data is incorrectly labeled in the early phases of research, then the insights and conclusions based on that data may be wrong in ways that create falsehoods, introduce bias, and give a false sense of safety, ethics, and evidence-based decision-making.

One area where we use AI for data labeling in Notably is sentiment analysis. In our testing, AI-generated sentiment results are accurate about 70-80% of the time. Typically sentiment plays a small role in analysis, often alongside other variables, and there is an existing level of shared understanding of sentiment and its labels (positive, negative, and neutral). However, when we look at some of the data labeling practices our competitors are embracing in their products, such as auto-clustering by themes, we've paused to ask: just because we could use AI for labeling, does that mean we should?

And in that pause we found that data labeling with AI isn't as accurate or advanced as some might assume. In our experience testing many labels across many datasets, the output was not reliable. This is especially true for unique labels, like the ones often used for themes and tags in research, where the definition of a label is contextual and specific to the research at hand. Therefore, we are skeptical that the present-day reliability of AI data labeling for research meets a standard where the benefit outweighs the risk.
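One way to make that reliability question concrete is to measure agreement between AI-suggested labels and the labels a researcher would assign by hand, including a per-label breakdown, since contextual labels tend to be where agreement drops. The sketch below is illustrative only, not how our internal testing is implemented; the function and label names are assumptions.

```python
def label_agreement(human_labels, ai_labels):
    """Compare AI-suggested labels against researcher-assigned labels.

    Returns overall agreement plus a per-label breakdown, which is where
    contextual labels (themes, tags) tend to fall apart.
    """
    assert len(human_labels) == len(ai_labels) and human_labels
    overall = sum(h == a for h, a in zip(human_labels, ai_labels)) / len(human_labels)

    per_label = {}
    for label in set(human_labels):
        idx = [i for i, h in enumerate(human_labels) if h == label]
        per_label[label] = sum(ai_labels[i] == label for i in idx) / len(idx)
    return overall, per_label

if __name__ == "__main__":
    human = ["positive", "negative", "neutral", "positive", "neutral"]
    ai    = ["positive", "neutral",  "neutral", "positive", "negative"]
    overall, per_label = label_agreement(human, ai)
    print(f"overall agreement: {overall:.0%}")  # 60% in this toy example
    print(per_label)
```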

Principle #5

The Better AI Gets, the More Responsibility We Have

Early research indicates that when people interact with effective AI, they quickly begin to trust and rely on the technology. For example, the more human-like an AI chatbot experience is, the more likely a user may be to share sensitive information.

An example specific to Notably: the more accurate and powerful the insights generated by AI, the less likely a researcher might be to trust their own instincts or to look for gaps that the AI might have missed.

There is a correlation between positive outcomes with AI and increased risk. In other words, the better AI gets, the more we can assume that humans will adjust their behaviors in ways that create new, unanticipated risks. This dynamic, where the more you use it, the riskier it gets, isn't so different from the hard lessons product development teams are learning right now about the negative mental health effects of social media use among teens. As designers and developers, it's our responsibility to learn from the mistakes of our industry and not perpetuate the same harmful practices. Building a technology industry whose long-term outcomes from AI are positive requires a higher level of thinking than the last generation of software design. That's why it's more important than ever that technology companies invest in experts across the fields of user research, psychology, and ethics.

As AI technologies rapidly evolve, we are committed to avoiding the same mistakes made by well-intentioned innovators who failed to look ahead and imagine the logical conclusions of mass adoption.

Principle #6

Avoid perpetuating the status quo

One of the risks of using AI, or specifically LLMs, to analyze research data is that their design and training methods predispose them to maintaining the status quo. By reinforcing existing patterns, LLMs can perpetuate the status quo instead of surfacing solutions that address or improve it. This preservation of the present order goes against the purpose of most research efforts, which is to understand problems that have not yet been solved and to discover alternatives to the status quo.

Using LLMs can entrench existing biases and prejudices. Additionally, because LLMs rely on existing datasets for training, they might fail to challenge established knowledge and perspectives, leading to a lack of new ideas, solutions, or paths to meaningful change.

Acknowledging this predisposition is necessary to actively mitigate, counterbalance, and educate on these tendencies of AI.

Principle #7

Don't Assume Old World QA Practices are Enough in the New World

As product developers, we have to acknowledge that the concept of "edge cases" no longer exists as it once did, thanks to generative AI. Unlike with non-AI features, testing AI features for functionality and user experience (UX) is not enough, since the context of the research, the data used, and the output sent back all affect whether a feature works and feels natural, and these factors are not always visible or predictable. As a result, while edge cases were once somewhat limited, the potential for edge cases in AI features is now endless. Even with rigorous testing using diverse datasets, there is always a chance that a prompt could break or generate harmful or offensive content. So we rely on multiple datasets during QA, despite the time and resources required, to improve the quality of outputs.

To address these modern problems, we’re adapting our QA practices by requesting feedback about the quality of AI outputs. While this level of intrusion may not be acceptable in a feature where inputs and outputs are predictable and routine, we hope these new QA practices will give us a way to monitor not only for accuracy, but also for potential harms, biases, and niche dataset examples where more specific prompts are necessary.
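As a rough sketch of what collecting that feedback could look like, the example below logs reviewer ratings and harm flags per dataset and then aggregates them, so low-rated or flagged outputs from niche datasets stand out. It is illustrative only, not our production QA tooling; the field names and thresholds are assumptions.

```python
from collections import defaultdict
from statistics import mean

def record_feedback(log, dataset, prompt_id, rating, harmful=False, note=""):
    """Append one piece of reviewer feedback about a single AI output."""
    log.append({"dataset": dataset, "prompt_id": prompt_id,
                "rating": rating, "harmful": harmful, "note": note})

def summarize_feedback(log):
    """Aggregate feedback per dataset so niche datasets that need more
    specific prompts, and any flagged harms, stand out."""
    by_dataset = defaultdict(list)
    for entry in log:
        by_dataset[entry["dataset"]].append(entry)
    return {
        dataset: {
            "avg_rating": mean(e["rating"] for e in entries),
            "harm_flags": sum(e["harmful"] for e in entries),
            "count": len(entries),
        }
        for dataset, entries in by_dataset.items()
    }

if __name__ == "__main__":
    log = []
    record_feedback(log, "support-tickets", "summarize-v2", rating=5)
    record_feedback(log, "support-tickets", "summarize-v2", rating=4)
    record_feedback(log, "field-notes", "summarize-v2", rating=2,
                    note="missed context; needs a more specific prompt")
    print(summarize_feedback(log))
```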

Principle #8

Clear & kind

AI is a complex technology that does not require end users to understand how it works in order to use it. People can benefit from AI and be harmed by AI, all without understanding what AI is or even that AI has played a role in their experience.

This combination of complex subject matter and an unsuspecting public creates an opportunity for false or exaggerated claims about AI to be made by people who benefit from “hype” or misinformation. Sadly, this isn’t a new phenomenon, but there is a growing trend in AI product marketing and online discussion where promoters say things like, “AI isn’t going to replace you as a researcher, but a researcher using AI will,” as an oversimplified tactic to motivate someone to use AI out of fear.

This isn’t Notably. We accept the challenge of taking a complex subject and making it as accessible as possible to as many people as possible, with clear and kind language. Turning very real concerns about AI’s future impact on the job market into a cold marketing tactic doesn’t feel right. We hope people will check out AI and our AI features because they are curious, because they need to save time, because they want to improve the quality of their insights… but never because we bullied them into thinking that if they don’t use AI they will lose their jobs or that their skills are redundant.

Principle #9

More Humanity, Not Less

While AI makes it easier than ever to speed up research through automation, the need for high-quality, authentic human experiences has never been greater. Rather than viewing research speed as the singular desired outcome, we prefer to view speed in relation to quality. The best way to ensure high quality is to make it easy for researchers to conduct research that emphasizes authentic human connections.

Some newer research platforms offer shortcuts that decrease humanity, such as AI-generated users to test prototypes, and even more classic platforms match researchers with anonymous participants. At Notably, we recognize that real human interactions are even more valuable in the age of AI.

Using AI to speed up certain aspects of research creates more opportunities to establish longer-lasting relationships with participants, have deeper conversations, and be more present during interviews and focus groups. In the modern competitive landscape, many aspects of innovation and business are not unique competitive advantages. However, your ability to truly connect with people who care enough about your product or mission to participate in research and share their experiences is nearly impossible for competitors to replicate. It’s also the kind of humanity-centered innovation the world needs more of.
