This post was co-written with Anthony Medeiros, Manager of Solutions Engineering and Architecture for North America Artificial Intelligence, and Blake Santschi, Business Intelligence Manager, from Schneider Electric. Additional Schneider Electric experts include Jesse Miller, Somik Chowdhury, Shaswat Babhulgaonkar, David Watkins, Mark Carlson and Barbara Sleczkowski. 

Enterprise Resource Planning (ERP) systems are used by companies to manage several business functions, such as accounting, sales, or order management, in one system. In particular, they are routinely used to store information related to customer accounts. Different organizations within a company might use different ERP systems, and merging them at scale is a complex technical challenge that requires domain-specific knowledge.

Schneider Electric is a leader in digital transformation of energy management and industrial automation. To best serve their customers’ needs, Schneider Electric needs to keep track of the links between related customers’ accounts in their ERP systems. As their customer base grows, new customers are added daily, and their account teams have to manually sort through these new customers and link them to the proper parent entity.

The linking decision is based on the most recent information available publicly on the Internet or in the media, and might be affected by recent acquisitions, market news or divisional re-structuring. An example of account linking would be to identify the relationship between Amazon and its subsidiary, Whole Foods Market [source].

Schneider Electric is deploying large language models for their capabilities in answering questions in various knowledge-specific domains, but an LLM's knowledge is limited by its training cutoff date. They addressed that challenge by using a Retrieval Augmented Generation (RAG) pipeline built on an open-source large language model available on Amazon SageMaker JumpStart, which processes large amounts of external knowledge and surfaces corporate or public relationships among ERP records.

In early 2023, when Schneider Electric decided to automate part of its accounts linking process using artificial intelligence (AI), the company partnered with the AWS Machine Learning Solutions Lab (MLSL). With MLSL’s expertise in ML consulting and execution, Schneider Electric was able to develop an AI architecture that would reduce the manual effort in their linking workflows, and deliver faster data access to their downstream analytics teams.

Generative AI

Generative AI and large language models (LLMs) are transforming the way business organizations are able to solve traditionally complex challenges related to natural language processing and understanding. Some of the benefits offered by LLMs include the ability to comprehend large portions of text and answer related questions by producing human-like responses. AWS makes it easy for customers to experiment with and productionize LLM workloads by making many options available via Amazon SageMaker JumpStart, Amazon Bedrock, and Amazon Titan.

External Knowledge Acquisition

LLMs are known for their ability to compress human knowledge and have demonstrated remarkable capabilities in answering questions in various knowledge-specific domains, but their knowledge is limited by the model's training cutoff date. We address that information cutoff by coupling the LLM with a Google Search API to deliver a powerful Retrieval Augmented Generation (RAG) pipeline that addresses Schneider Electric's challenges. The pipeline is able to process large amounts of external knowledge pulled from Google Search and exhibit corporate or public relationships among ERP records.

See the following example:

Question: Who is the parent company of One Medical?
Google query: “One Medical parent company” → information → LLM
Answer: One Medical, a subsidiary of Amazon…

The preceding example (taken from the Schneider Electric customer database) concerns an acquisition that happened in February 2023 and thus would not be caught by the LLM alone due to knowledge cutoffs. Augmenting the LLM with Google Search ensures access to the most up-to-date information.

Flan-T5 model

In this project, we used the Flan-T5-XXL model from the Flan-T5 family of models.

The Flan-T5 models are instruction-tuned and are therefore capable of performing various zero-shot NLP tasks. Our downstream task did not require a vast amount of world knowledge, but rather good question-answering performance given a context of texts provided through search results; therefore, the 11B-parameter Flan-T5-XXL model performed well.

JumpStart provides convenient deployment of this model family through Amazon SageMaker Studio and the SageMaker SDK. This includes Flan-T5 Small, Flan-T5 Base, Flan-T5 Large, Flan-T5 XL, and Flan-T5 XXL. Furthermore, JumpStart provides a few versions of Flan-T5 XXL at different levels of quantization. We deployed Flan-T5-XXL to an endpoint for inference using Amazon SageMaker Studio JumpStart.
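
A minimal deployment sketch using the SageMaker Python SDK follows; the JumpStart model ID, payload key, and response format are assumptions based on the JumpStart Flan-T5 catalog entries, so verify them against your SDK version:

# Deploy Flan-T5-XXL from SageMaker JumpStart (model ID is an assumption;
# verify against the JumpStart catalog for your SDK version)
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-text2text-flan-t5-xxl")
predictor = model.deploy()  # defaults to the model's recommended instance type

# Quick zero-shot sanity check against the endpoint
response = predictor.predict({"text_inputs": "Who is the parent company of Whole Foods Market?"})
print(response["generated_texts"][0])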

Retrieval Augmented LLM with LangChain

Flan-T5-XXL is deployed to a SageMaker endpoint from Amazon SageMaker Studio JumpStart. LangChain performs the overall orchestration and allows the search result pages to be fed into the Flan-T5-XXL instance.

Retrieval Augmented Generation consists of two steps:

  1. Retrieval of relevant text chunks from external sources.
  2. Augmentation of the chunks with context in the prompt given to the LLM.

For Schneider Electric’s use case, the RAG proceeds as follows:

  1. The given company name is combined with a question like “Who is the parent company of X?” (where X is the given company) and passed as a Google search query using the Serper API.
  2. The extracted information is combined with the prompt and original question, and passed to the LLM for an answer.

The following diagram illustrates this process.

RAG Workflow

Use the following code to create an endpoint:

# Spin up the Flan-T5-XXL SageMaker endpoint as a LangChain LLM
llm = SagemakerEndpoint(...)
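
The constructor arguments are elided above. A plausible instantiation is sketched below; the endpoint name is hypothetical, and the text_inputs/generated_texts payload keys assume the JumpStart Flan-T5 request format:

import json

from langchain.llms.sagemaker_endpoint import SagemakerEndpoint, LLMContentHandler

class FlanT5ContentHandler(LLMContentHandler):
    # Serializes prompts to, and parses responses from, the Flan-T5 endpoint
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # JumpStart Flan-T5 endpoints expect a "text_inputs" field (assumption)
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response = json.loads(output.read().decode("utf-8"))
        return response["generated_texts"][0]

llm = SagemakerEndpoint(
    endpoint_name="flan-t5-xxl-endpoint",  # hypothetical endpoint name
    region_name="us-east-1",
    model_kwargs={"temperature": 1e-10, "max_length": 100},
    content_handler=FlanT5ContentHandler(),
)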

Instantiate the search tool:

from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

search = GoogleSerperAPIWrapper()
search_tool = Tool(
    name="Search",
    func=search.run,
    description="useful for when you need to ask with search",
    verbose=False,
)
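
As a quick sanity check of the wrapper (which reads the SERPER_API_KEY environment variable for serper.dev access):

# Requires SERPER_API_KEY to be set in the environment
print(search_tool.run("One Medical parent company"))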

In the following code, we chain together the retrieval and augmentation components:

my_template = """
Answer the following question using the information. n
Question : {question}? n
Information : {search_result} n
Answer: """
prompt_template = PromptTemplate( input_variables=["question", 'search_result'], template=my_template)
question_chain = LLMChain( llm=llm, prompt=prompt_template, output_key="answer") def search_and_reply_company(company): # Retrieval search_result = search_tool.run(f"{company} parent company") # Augmentation output = question_chain({ "question":f"Who is the parent company of {company}?", "search_result": search_result}) return output["answer"] search_and_reply_company("Whole Foods Market") "Amazon"

Prompt Engineering

The combination of the context and the question is called the prompt. We noticed that the blanket prompt we used (variations around asking for the parent company) performed well for most public sectors (domains), but didn’t generalize well to education or healthcare, since the notion of a parent company is not meaningful there. For education, we used “X”, while for healthcare we used “Y”.

To enable this domain-specific prompt selection, we also had to identify the domain a given account belongs to. For this, we again used a RAG: we asked the multiple-choice question “What is the domain of {account}?” as a first step, and based on the answer, we inquired about the parent of the account using the relevant prompt as a second step. See the following code:

my_template_options = """
Answer the following question using the information. n
Question :  {question}? n
Information : {search_result} n
Options :n {options} n
Answer: """ prompt_template_options = PromptTemplate(
input_variables=["question", 'search_result', 'options'],
template=my_template_options)
question_chain = LLMChain( llm=llm, prompt=prompt_template_options, output_key="answer") my_options = """
- healthcare
- education
- oil and gas
- banking
- pharma
- other domain """ def search_and_reply_domain(company):
search_result = search_tool.run(f"{company} ")
output = question_chain({ "question":f"What is the domain of {company}?", "search_result": search_result, "options":my_options})
return output["answer"] search_and_reply_domain("Exxon Mobil") "oil and gas"

The sector-specific prompts boosted the overall accuracy from 55% to 71%. Overall, the effort and time invested in developing effective prompts appear to significantly improve the quality of LLM responses.
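
The accuracy figures above come from comparing pipeline answers against manually linked accounts. A hypothetical evaluation loop of this kind is sketched below; the labeled pairs and the substring-based matching are illustrative, not Schneider Electric's actual benchmark:

# Hypothetical evaluation: labeled_pairs is an illustrative list of
# (account, expected_parent) tuples, not the actual benchmark data
labeled_pairs = [
    ("Whole Foods Market", "Amazon"),
    ("Twitch", "Amazon"),
]

def evaluate(pairs):
    correct = 0
    for account, expected in pairs:
        answer = search_and_reply_company(account)
        # Simple normalization; production matching would be more careful
        if expected.lower() in answer.lower():
            correct += 1
    return correct / len(pairs)

print(f"Accuracy: {evaluate(labeled_pairs):.0%}")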

RAG with tabular data (SEC 10-K)

SEC 10-K filings, filed annually by publicly traded companies, are another reliable source of information about subsidiaries and subdivisions. These filings are available directly on SEC EDGAR or through the CorpWatch API.
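
A minimal sketch of pulling company records over HTTP follows; the endpoint path, parameters, and response fields are assumptions about the public CorpWatch (CRONOS) API, so check its documentation before relying on them:

import requests

# Hypothetical CorpWatch query; endpoint and parameters are assumptions
resp = requests.get(
    "http://api.corpwatch.org/companies.json",
    params={"company_name": "amazon", "limit": 5},
    timeout=30,
)
resp.raise_for_status()
companies = resp.json()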

We assume the information is given in tabular format. Below is a pseudo-CSV dataset that mimics the original schema of the SEC 10-K dataset. It is possible to merge multiple CSV data sources into a combined pandas dataframe, as sketched after the table:

# A pseudo dataset similar by schema to the CorpWatch API dataset
df.head()

index  relation_id  source_cw_id  target_cw_id  parent   subsidiary
1      90           22569         37            AMAZON   WHOLE FOODS MARKET
873    1467         22569         781           AMAZON   TWITCH
899    1505         22569         821           AMAZON   ZAPPOS
900    1506         22569         821           AMAZON   ONE MEDICAL
901    1507         22569         821           AMAZON   WOOT!
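
The merge itself can be a simple concatenation; a minimal sketch follows, where the file paths are hypothetical and the CSV files are assumed to share the schema shown above:

import glob

import pandas as pd

# Combine all relation CSVs into one dataframe (paths are hypothetical)
csv_paths = glob.glob("data/sec10k_relations_*.csv")
df = pd.concat((pd.read_csv(p) for p in csv_paths), ignore_index=True)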

LangChain provides an abstraction layer for pandas through create_pandas_dataframe_agent. There are two key advantages to using LangChain/LLMs for this task:

  1. Once spun up, it allows a downstream consumer to interact with the dataset in natural language rather than code.
  2. It is more robust to misspellings and different ways of naming accounts.

We spin up the endpoint as before and create the agent:

# Create the pandas dataframe agent
from langchain.agents import create_pandas_dataframe_agent

agent = create_pandas_dataframe_agent(llm, df, verbose=True)

In the following code, we query for the parent/subsidiary relationship and the agent translates the query into pandas language:

# Example 1
query = "Who is the parent of WHOLE FOODS MARKET?"
agent.run(query)

#### output
> Entering new AgentExecutor chain...
Thought: I need to find the row with WHOLE FOODS MARKET in the subsidiary column
Action: python_repl_ast
Action Input: df[df['subsidiary'] == 'WHOLE FOODS MARKET']
Observation:
source_cw_id  target_cw_id  parent   subsidiary
22569         37            AMAZON   WHOLE FOODS MARKET
Thought: I now know the final answer
Final Answer: AMAZON
> Finished chain.

# Example 2
query = "Who are the subsidiaries of Amazon?"
agent.run(query)
#### output
> Entering new AgentExecutor chain...
Thought: I need to find the row with source_cw_id of 22569
Action: python_repl_ast
Action Input: df[df['source_cw_id'] == 22569]
...
Thought: I now know the final answer
Final Answer: The subsidiaries of Amazon are Whole Foods Market, Twitch, Zappos, One Medical, Woot!...
> Finished chain.

'The subsidiaries of Amazon are Whole Foods Market, Twitch, Zappos, One Medical, Woot!.'

Conclusion

In this post, we detailed how we used building blocks from LangChain to augment an LLM with search capabilities, in order to uncover relationships between Schneider Electric’s customer accounts. We extended the initial pipeline to a two-step process with domain identification before using a domain-specific prompt for higher accuracy.

In addition to Google Search queries, datasets that detail corporate structures, such as SEC 10-K filings, can be used to further augment the LLM with trustworthy information. The Schneider Electric team will also be able to extend and design their own prompts, mimicking the way they classify some public sector accounts, further improving the accuracy of the pipeline. These capabilities will enable Schneider Electric to maintain up-to-date and accurate organizational structures for their customers, and unlock the ability to run analytics on top of this data.


About the Authors

Anthony Medeiros is a Manager of Solutions Engineering and Architecture at Schneider Electric. He specializes in delivering high-value AI/ML initiatives to many business functions within North America. With 17 years of experience at Schneider Electric, he brings a wealth of industry knowledge and technical expertise to the team.

Blake Santschi is a Business Intelligence Manager at Schneider Electric, leading an analytics team focused on supporting the Sales organization through data-driven insights.

Joshua Levy is a Senior Applied Science Manager in the Amazon Machine Learning Solutions Lab, where he helps customers design and build AI/ML solutions to solve key business problems.

Kosta Belz is a Senior Applied Scientist with the AWS MLSL, with a focus on Generative AI and document processing. He is passionate about building applications using Knowledge Graphs and NLP. He has around 10 years of experience in building Data & AI solutions that create value for customers and enterprises.

Aude Genevay is an Applied Scientist in the Amazon GenAI Incubator, where she helps customers solve key business problems through ML and AI. She was previously a researcher in theoretical ML and enjoys applying her knowledge to deliver state-of-the-art solutions to customers.

Md Sirajus Salekin is an Applied Scientist at the AWS Machine Learning Solutions Lab. He helps AWS customers accelerate their business by building AI/ML solutions. His research interests are multimodal machine learning, generative AI, and ML applications in healthcare.

Zichen Wang, PhD, is a Senior Applied Scientist in AWS. With several years of research experience in developing ML and statistical methods using biological and medical data, he works with customers across various verticals to solve their ML problems.

Anton Gridin is a Principal Solutions Architect supporting Global Industrial Accounts, based out of New York City. He has more than 15 years of experience building secure applications and leading engineering teams.

Source: https://aws.amazon.com/blogs/machine-learning/schneider-electric-leverages-retrieval-augmented-llms-on-sagemaker-to-ensure-real-time-updates-in-their-erp-systems/


