For enterprises, figuring out the right prompt to get the best result from a generative AI model is not always an easy task. In some organizations, that job has fallen to the newly created role of prompt engineer, but that's not quite what has happened at LinkedIn.
The professional networking platform is owned by Microsoft and has more than 1 billion user accounts. Although LinkedIn is a large organization, it faced the same basic challenge that organizations of nearly any size face with gen AI: bridging the gap between technical and non-technical business users. For LinkedIn, the gen AI use cases are both end-user- and internal-user-facing.
While some organizations might choose to simply share prompts in spreadsheets, or in Slack and other messaging channels, LinkedIn took a somewhat novel approach. The company built what it calls a "collaborative prompt engineering playground" that enables technical and non-technical users to work together. The system combines large language models (LLMs), LangChain and Jupyter Notebooks.
LinkedIn has already used the approach to add AI features to its Sales Navigator product, specifically AccountIQ, a tool that cuts company research time from two hours to five minutes.
Much like every other organization on the planet, LinkedIn's initial gen AI journey started out by just trying to figure out what works.
"When we started working on projects using gen AI, product managers always had too many ideas, like 'Hey, why can't we try this? Why can't we try that?'" Ajay Prakash, LinkedIn staff software engineer, told VentureBeat. "The whole idea was to make it possible for them to do the prompt engineering and try out different things, and not have the engineers be the bottleneck for everything."
The organizational challenge of deploying gen AI in a technical enterprise
To be sure, LinkedIn is no stranger to the world of machine learning (ML) and AI.
Before ChatGPT ever came onto the scene, LinkedIn had already built a toolkit to measure AI model fairness. At VB Transform in 2022, the company outlined its AI strategy (at that time). Gen AI, however, is a bit different: It doesn't specifically require engineers to use, and it is more broadly accessible. That's the revolution that ChatGPT sparked. Building gen AI-powered applications is not entirely the same as building a traditional application.
Prakash explained that before gen AI, engineers would typically get a set of product requirements from product management staff, then go out and build the product.
With gen AI, by contrast, product managers are trying out different things to see what's possible and what works. Unlike traditional ML, which wasn't accessible to non-technical staff, gen AI is easier for all types of users to work with.
Traditional prompt engineering often creates bottlenecks, with engineers serving as gatekeepers for any changes or experiments. LinkedIn's approach transforms this dynamic by providing a user-friendly interface through customized Jupyter Notebooks, which have traditionally been used for data science and ML tasks.
What's inside the LinkedIn prompt engineering playground
It should come as no surprise that the default LLM vendor used by LinkedIn is OpenAI. After all, LinkedIn is part of Microsoft, which hosts the Azure OpenAI platform.
Lukasz Karolewski, LinkedIn's senior engineering manager, explained that it was simply more convenient to use OpenAI, as his team had easier access within the LinkedIn/Microsoft environment. He noted that other models would require additional security and legal review processes, which would take longer. The team initially prioritized validating the product and the idea rather than optimizing for the best model.
The LLM is only one part of the system, which also includes:
Jupyter Notebooks for the interface layer;
LangChain for prompt orchestration;
Trino for data lake queries during testing;
Container-based deployment for easy access;
Custom UI elements for non-technical users.
How LinkedIn's collaborative prompt engineering playground works
Jupyter Notebooks have been widely used in the ML community for nearly a decade as a way to help define models and data using an interactive Python interface.
Karolewski explained that LinkedIn pre-programmed Jupyter Notebooks to make them more accessible for non-technical users. The notebooks include UI elements like text boxes and buttons that make it easier for any type of user to get started. The notebooks are packaged in a way that allows users to easily launch the environment with minimal instructions, and without having to set up a complex development environment. The main purpose is to let both technical and non-technical users experiment with different prompts and ideas for using gen AI.
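LinkedIn hasn't published its notebook code, but the pattern described above, a text box and a button wired to a callback inside a notebook, can be sketched with the standard ipywidgets library. Everything below (the widget labels, the stubbed handler that records prompts instead of calling a model) is an illustrative assumption, not LinkedIn's implementation:

```python
# Illustrative sketch only: a prompt box and run button built with ipywidgets,
# the standard widget library for Jupyter Notebooks.
import ipywidgets as widgets

prompt_box = widgets.Textarea(
    description="Prompt:",
    placeholder="e.g. Summarize this company's recent hiring trends",
)
run_button = widgets.Button(description="Run prompt")
submitted_prompts = []  # stand-in for sending the prompt to an LLM


def on_run(_button):
    # In a real playground this callback would send prompt_box.value to the
    # model and render the response; here it just records the prompt text.
    submitted_prompts.append(prompt_box.value)


run_button.on_click(on_run)

# In a notebook, the widgets would be rendered with:
# from IPython.display import display
# display(prompt_box, run_button)
```

Because the widgets hide the surrounding Python, a non-technical user only ever sees a form, while an engineer can open the same notebook and edit the code behind it.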
To make this work, the team also integrated access to data from LinkedInâs internal data lake. This allows users to pull in data in a secure way to use in prompts and experiments.
LangChain serves as the library for orchestrating gen AI applications. The framework lets the team chain together different prompts and steps, such as fetching data from external sources, filtering it and synthesizing the final output.
While LinkedIn is not currently focused on building fully autonomous, agent-based applications, Karolewski said he sees LangChain as a foundation for potentially moving in that direction in the future.
LinkedIn's approach also includes multi-layered evaluation mechanisms:
Embedding-based relevance-checking for output validation;
Automated harm detection through pre-built evaluators;
LLM-based evaluation using larger models to assess smaller ones;
Integrated human expert review processes.
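The article doesn't detail these evaluators, but the first one, embedding-based relevance checking, commonly amounts to embedding the source context and the model's output and flagging outputs whose cosine similarity is too low. A toy sketch of that idea, using a bag-of-words counter in place of a real embedding model and an arbitrary threshold:

```python
# Toy sketch of embedding-based relevance checking. A real system would use
# a learned embedding model; here a word-count vector stands in for it.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Bag-of-words stand-in for an embedding model.
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def is_relevant(context: str, output: str, threshold: float = 0.2) -> bool:
    # Flag outputs that drift too far from the source context.
    return cosine_similarity(embed(context), embed(output)) >= threshold
```

The same scoring pattern generalizes to the other layers: an automated check produces a score, and only borderline or failing outputs need to reach the human expert review step.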
From hours to minutes: Real-world impact of the prompt engineering playground
The effectiveness of this approach is demonstrated by LinkedIn's AccountIQ feature, which reduced company research time from two hours to five minutes.
This improvement wasn't just about faster processing; it represented a fundamental shift in how AI features could be developed and refined with direct input from domain experts.
"We're not domain experts in sales," said Karolewski. "This platform allows sales experts to directly validate and refine AI features, creating a tight feedback loop that wasn't possible before."
While LinkedIn isn't planning to open source its gen AI prompt engineering playground due to its deep integration with internal systems, the approach offers lessons for other enterprises looking to scale AI development. Although the full implementation might not be available, the same basic building blocks, namely an LLM, LangChain and Jupyter Notebooks, are available for other organizations to build out a similar approach.
Both Karolewski and Prakash emphasized that with gen AI, it's critical to focus on accessibility and to enable cross-functional collaboration from the start.
"We got a lot of ideas from the community, and we learned a lot from the community," said Karolewski. "We're primarily curious what other people think and how they're bringing expertise from subject matter experts into engineering teams."




