This is a guest post by A.K Roy from Qualcomm AI.

Amazon Elastic Compute Cloud (Amazon EC2) DL2q instances, powered by Qualcomm AI 100 Standard accelerators, can be used to cost-efficiently deploy deep learning (DL) workloads in the cloud. They can also be used to develop and validate the performance and accuracy of DL workloads that will be deployed on Qualcomm devices. DL2q instances are the first instances to bring Qualcomm’s artificial intelligence (AI) technology to the cloud.

With eight Qualcomm AI 100 Standard accelerators and 128 GiB of total accelerator memory, customers can also use DL2q instances to run popular generative AI applications, such as content generation, text summarization, and virtual assistants, as well as classic AI applications for natural language processing and computer vision. Additionally, Qualcomm AI 100 accelerators feature the same AI technology used across smartphones, autonomous driving, personal computers, and extended reality headsets, so DL2q instances can be used to develop and validate these AI workloads before deployment.

New DL2q instance highlights

Each DL2q instance incorporates eight Qualcomm Cloud AI100 accelerators, delivering an aggregate of over 2.8 PetaOps of Int8 inference performance and 1.4 PetaFlops of FP16 inference performance. The instance has an aggregate of 112 AI cores, an accelerator memory capacity of 128 GB, and a memory bandwidth of 1.1 TB per second.

Each DL2q instance has 96 vCPUs and a system memory capacity of 768 GB, and supports a networking bandwidth of 100 Gbps as well as Amazon Elastic Block Store (Amazon EBS) bandwidth of 19 Gbps.

Instance name | vCPUs | Cloud AI100 accelerators | Accelerator memory | Accelerator memory BW (aggregated) | Instance memory | Instance networking | Storage (Amazon EBS) bandwidth
DL2q.24xlarge | 96 | 8 | 128 GB | 1.088 TB/s | 768 GB | 100 Gbps | 19 Gbps
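
For a rough per-accelerator view of these aggregates, the following sketch simply divides by the eight accelerators; the even split across cards is an assumption, and the inputs are the rounded values from the table above.

    # Per-accelerator figures implied by the DL2q.24xlarge aggregates,
    # assuming an even split across the eight Cloud AI100 accelerators.
    accelerators = 8
    aggregate = {
        "Int8 compute (TOPS)": 2800,      # 2.8 PetaOps
        "FP16 compute (TFLOPS)": 1400,    # 1.4 PetaFlops
        "AI cores": 112,
        "Accelerator memory (GB)": 128,
        "Memory bandwidth (GB/s)": 1088,  # 1.088 TB/s
    }
    for metric, total in aggregate.items():
        print(f"{metric}: {total / accelerators:g} per accelerator")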

Qualcomm Cloud AI100 accelerator innovation

The Cloud AI100 accelerator system-on-chip (SoC) is a purpose-built, scalable multi-core architecture, supporting a wide range of deep-learning use-cases spanning from the datacenter to the edge. The SoC employs scalar, vector, and tensor compute cores with an industry-leading on-die SRAM capacity of 126 MB. The cores are interconnected with a high-bandwidth low-latency network-on-chip (NoC) mesh.

The AI100 accelerator supports a broad and comprehensive range of models and use-cases. The following table highlights the range of model support.

Model category | Number of models | Examples
NLP | 157 | BERT, BART, FasterTransformer, T5, Z-code MOE
Generative AI – NLP | 40 | LLaMA, CodeGen, GPT, OPT, BLOOM, Jais, Luminous, StarCoder, XGen
Generative AI – Image | 3 | Stable Diffusion v1.5 and v2.1, OpenAI CLIP
CV – Image classification | 45 | ViT, ResNet, ResNeXt, MobileNet, EfficientNet
CV – Object detection | 23 | YOLO v2, v3, v4, v5, and v7, SSD-ResNet, RetinaNet
CV – Other | 15 | LPRNet, Super-resolution/SRGAN, ByteTrack
Automotive networks* | 53 | Perception and LIDAR, pedestrian, lane, and traffic light detection
Total | >300 |

* Most automotive networks are composite networks consisting of a fusion of individual networks.

The large on-die SRAM on the DL2q accelerator enables efficient implementation of advanced performance techniques such as MX6 micro-exponent precision for storing the weights and MX9 micro-exponent precision for accelerator-to-accelerator communication. The micro-exponent technology is described in the Open Compute Project (OCP) industry announcement, AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Standardize Next-Generation Narrow Precision Data Formats for AI.

The instance user can use the following strategy to maximize performance-per-cost (a memory-footprint sketch follows this list):

  • Store weights in MX6 micro-exponent precision in the on-accelerator DDR memory. Using MX6 precision maximizes the utilization of the available memory capacity and memory bandwidth to deliver best-in-class throughput and latency.
  • Compute in FP16 to deliver the required use-case accuracy, while using the superior on-chip SRAM and spare TOPs on the card to implement high-performance, low-latency MX6-to-FP16 kernels.
  • Use an optimized batching strategy and a higher batch size, taking advantage of the large on-chip SRAM to maximize the reuse of weights while retaining activations on-chip to the greatest extent possible.
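
As a rough illustration of why storing weights in MX6 pays off, the following sketch compares the weight-memory footprint of an illustrative 7-billion-parameter model across precisions. It assumes MX6 follows the OCP MXFP6 layout referenced above (blocks of 32 six-bit elements sharing an 8-bit scale, about 6.25 bits per weight); whether the accelerator's MX6 format matches that layout exactly is an assumption here, not something stated in this post.

    # Weight-storage footprint for an illustrative 7B-parameter model.
    # MX6 is assumed to be OCP MXFP6: 32 x 6-bit elements plus one shared
    # 8-bit scale per block, i.e. about 6.25 bits per weight.
    PARAMS = 7e9

    def weight_gib(bits_per_weight: float) -> float:
        """Total weight storage in GiB for the given bits per weight."""
        return PARAMS * bits_per_weight / 8 / 2**30

    for name, bits in [("FP32", 32), ("FP16", 16), ("MX6 (assumed)", 6.25)]:
        print(f"{name}: {weight_gib(bits):.1f} GiB")

    # Moving weights from FP16 to MX6 shrinks the footprint (and the DDR
    # bandwidth needed to stream weights) by roughly 16 / 6.25 = 2.56x, which
    # is why storing in MX6 while computing in FP16 improves throughput.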

DL2q AI Stack and toolchain

The DL2q instance is accompanied by the Qualcomm AI Stack that delivers a consistent developer experience across Qualcomm AI in the cloud and other Qualcomm products. The same Qualcomm AI stack and base AI technology runs on the DL2q instances and Qualcomm edge devices, providing customers a consistent developer experience, with a unified API across their cloud, automotive, personal computer, extended reality, and smartphone development environments.

The toolchain enables the instance user to quickly onboard a previously trained model, compile and optimize the model for the instance capabilities, and subsequently deploy the compiled models for production inference use-cases in three steps shown in the following figure.

[Figure: the three-step toolchain flow for onboarding, compiling, and deploying a trained model on the DL2q instance]

To learn more about tuning the performance of a model, see the Cloud AI 100 Key Performance Parameters Documentation.

Get started with DL2q instances

In this example, you compile and deploy a pre-trained BERT model from Hugging Face on an EC2 DL2q instance using a pre-built DL2q AMI, in four steps.

You can use either a pre-built Qualcomm DLAMI on the instance, or start with an Amazon Linux 2 AMI and build your own DL2q AMI with the Cloud AI 100 Platform and Apps SDK available in this Amazon Simple Storage Service (Amazon S3) bucket: s3://ec2-linux-qualcomm-ai100-sdks/latest/.

The steps that follow use the pre-built DL2q AMI, Qualcomm Base AL2 DLAMI.

Use SSH to access your DL2q instance with the Qualcomm Base AL2 DLAMI and follow steps 1 through 4.

Step 1. Set up the environment and install required packages

  1. Install Python 3.8.
    sudo amazon-linux-extras install python3.8

  2. Set up the Python 3.8 virtual environment.
    python3.8 -m venv /home/ec2-user/userA/pyenv

  3. Activate the Python 3.8 virtual environment.
    source /home/ec2-user/userA/pyenv/bin/activate

  4. Install the required packages, listed in the requirements.txt file available at the Qualcomm public GitHub site.
    pip3 install -r requirements.txt

  5. Import the necessary libraries.
    import transformers 
    from transformers import AutoTokenizer, AutoModelForMaskedLM
    import sys
    import qaic
    import os
    import torch
    import onnx
    from onnxsim import simplify
    import argparse
    import numpy as np

Step 2. Import the model

  1. Import and tokenize the model.
    model_card = 'bert-base-cased'
    model = AutoModelForMaskedLM.from_pretrained(model_card)
    tokenizer = AutoTokenizer.from_pretrained(model_card)

  2. Define a sample input and extract the inputIds and attentionMask.
    sentence = "The dog [MASK] on the mat."
    encodings = tokenizer(sentence, max_length=128, truncation=True, padding="max_length", return_tensors='pt')
    inputIds = encodings["input_ids"]
    attentionMask = encodings["attention_mask"]

  3. Convert the model to ONNX, which can then be passed to the compiler. (An optional graph-simplification pass with the onnxsim package imported in step 1 is sketched after this list.)
    # Set dynamic dims and axes.
    dynamic_dims = {0: 'batch', 1 : 'sequence'}
    dynamic_axes = {
        "input_ids" : dynamic_dims,
        "attention_mask" : dynamic_dims,
        "logits" : dynamic_dims
    }
    input_names = ["input_ids", "attention_mask"]
    inputList = [inputIds, attentionMask]
    
    # Define where the exported ONNX model will be written; these paths are
    # reused by the FP16 fix below and by the compile step in step 3.
    model_base_name = model_card  # 'bert-base-cased'
    gen_models_path = f"{model_card}/generatedModels"
    os.makedirs(gen_models_path, exist_ok=True)
    
    torch.onnx.export(
        model,
        args=tuple(inputList),
        f=f"{gen_models_path}/{model_base_name}.onnx",
        verbose=False,
        input_names=input_names,
        output_names=["logits"],
        dynamic_axes=dynamic_axes,
        opset_version=11,
    )

  4. You’ll run the model in FP16 precision, so you need to check whether the model contains any constants beyond the FP16 range. Pass the model to the fix_onnx_fp16 function to generate a new ONNX file with the required fixes.
    from onnx import numpy_helper
            
    def fix_onnx_fp16(
        gen_models_path: str,
        model_base_name: str,
    ) -> str:
        finfo = np.finfo(np.float16)
        fp16_max = finfo.max
        fp16_min = finfo.min
        model = onnx.load(f"{gen_models_path}/{model_base_name}.onnx")
        fp16_fix = False
        for tensor in onnx.external_data_helper._get_all_tensors(model):
            nptensor = numpy_helper.to_array(tensor, gen_models_path)
            if nptensor.dtype == np.float32 and (
                np.any(nptensor > fp16_max) or np.any(nptensor < fp16_min)
            ):
                # print(f'tensor value : {nptensor} above {fp16_max} or below {fp16_min}')
                nptensor = np.clip(nptensor, fp16_min, fp16_max)
                new_tensor = numpy_helper.from_array(nptensor, tensor.name)
                tensor.CopyFrom(new_tensor)
                fp16_fix = True
                
        if fp16_fix:
            # Save FP16 model
            print("Found constants out of FP16 range, clipped to FP16 range")
            model_base_name += "_fix_outofrange_fp16"
            onnx.save(model, f=f"{gen_models_path}/{model_base_name}.onnx")
            print(f"Saving modified onnx file at {gen_models_path}/{model_base_name}.onnx")
        return model_base_name
    
    fp16_model_name = fix_onnx_fp16(gen_models_path=gen_models_path, model_base_name=model_base_name)
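
As referenced in item 3, you can optionally run the exported graph through onnxsim (imported in step 1 but not otherwise used in this walkthrough) before applying the FP16 fix, to fold constants and remove redundant nodes. The following is a hedged sketch of that optional pass, not a required part of the flow; verify that it actually helps for your model before adopting it.

    # Optional: simplify the exported ONNX graph before the FP16 range fix.
    onnx_model = onnx.load(f"{gen_models_path}/{model_base_name}.onnx")
    simplified_model, check = simplify(onnx_model)
    if check:
        # Overwrite the exported file with the simplified graph.
        onnx.save(simplified_model, f"{gen_models_path}/{model_base_name}.onnx")
    else:
        print("onnxsim could not validate the simplified graph; keeping the original export.")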

Step 3. Compile the model

The qaic-exec command line interface (CLI) compiler tool is used to compile the model. The input to this compiler is the ONNX file generated in step 2. The compiler produces a binary file (called a QPC, for Qualcomm program container) in the path defined by the -aic-binary-dir argument.

In the compile command below, you use four AI compute cores and a batch size of one to compile the model.

/opt/qti-aic/exec/qaic-exec \
-m=bert-base-cased/generatedModels/bert-base-cased_fix_outofrange_fp16.onnx \
-aic-num-cores=4 \
-convert-to-fp16 \
-onnx-define-symbol=batch,1 -onnx-define-symbol=sequence,128 \
-aic-binary-dir=bert-base-cased/generatedModels/bert-base-cased_fix_outofrange_fp16_qpc \
-aic-hw -aic-hw-version=2.0 \
-compile-only

The QPC is generated in the bert-base-cased/generatedModels/bert-base-cased_fix_outofrange_fp16_qpc folder.
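
If you prefer to drive compilation from Python (for example, to sweep core counts or batch sizes), a minimal subprocess wrapper around the same qaic-exec invocation could look like the following sketch. The flags are exactly those used in the command above; the helper function name and its defaults are illustrative.

    import subprocess

    def compile_qpc(onnx_path: str, qpc_dir: str, num_cores: int = 4,
                    batch: int = 1, sequence: int = 128) -> None:
        """Invoke qaic-exec with the same flags as the shell command above."""
        cmd = [
            "/opt/qti-aic/exec/qaic-exec",
            f"-m={onnx_path}",
            f"-aic-num-cores={num_cores}",
            "-convert-to-fp16",
            f"-onnx-define-symbol=batch,{batch}",
            f"-onnx-define-symbol=sequence,{sequence}",
            f"-aic-binary-dir={qpc_dir}",
            "-aic-hw",
            "-aic-hw-version=2.0",
            "-compile-only",
        ]
        subprocess.run(cmd, check=True)

    compile_qpc(
        "bert-base-cased/generatedModels/bert-base-cased_fix_outofrange_fp16.onnx",
        "bert-base-cased/generatedModels/bert-base-cased_fix_outofrange_fp16_qpc",
    )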

Step 4. Run the model

Set up a session to run the inference on a Qualcomm Cloud AI100 accelerator in the DL2q instance.

The Qualcomm qaic Python library is a set of APIs that provides support for running inference on the Cloud AI100 accelerator.

  1. Use the Session API call to create an instance of a session. The Session API call is the entry point to using the qaic Python library.
    qpcPath = 'bert-base-cased/generatedModels/bert-base-cased_fix_outofrange_fp16_qpc'
    
    bert_sess = qaic.Session(model_path= qpcPath+'/programqpc.bin', num_activations=1)  
    bert_sess.setup() # Loads the network to the device. 
    
    # Here we are reading out all the input and output shapes/types
    input_shape, input_type = bert_sess.model_input_shape_dict['input_ids']
    attn_shape, attn_type = bert_sess.model_input_shape_dict['attention_mask']
    output_shape, output_type = bert_sess.model_output_shape_dict['logits']
    
    #create the input dictionary for given input sentence
    input_dict = {"input_ids": inputIds.numpy().astype(input_type), "attention_mask" : attentionMask.numpy().astype(attn_type)}
    
    #run inference on Cloud AI 100
    output = bert_sess.run(input_dict)

  2. Restructure the data from the output buffer using output_shape and output_type.
    token_logits = np.frombuffer(output['logits'], dtype=output_type).reshape(output_shape)

  3. Decode the output produced.
    # Locate the position of the [MASK] token in the input sequence.
    mask_token_index = torch.where(inputIds == tokenizer.mask_token_id)[1].item()
    mask_token_logits = torch.from_numpy(token_logits[0, mask_token_index, :]).unsqueeze(0)
    top_5_results = torch.topk(mask_token_logits, 5, dim=1)
    print("Model output (top5) from Qualcomm Cloud AI 100:")
    for i in range(5):
        idx = top_5_results.indices[0].tolist()[i]
        val = top_5_results.values[0].tolist()[i]
        word = tokenizer.decode([idx])
        print(f"{i+1} :(word={word}, index={idx}, logit={round(val,2)})")

Here are the outputs for the input sentence “The dog [MASK] on the mat.”

1 :(word=sat, index=2068, logit=11.46)
2 :(word=landed, index=4860, logit=11.11)
3 :(word=spat, index=15732, logit=10.95)
4 :(word=settled, index=3035, logit=10.84)
5 :(word=was, index=1108, logit=10.75)

That’s it. With just a few steps, you compiled and ran a PyTorch model on an Amazon EC2 DL2q instance. To learn more about onboarding and compiling models on the DL2q instance, see the Cloud AI100 Tutorial Documentation.

To learn more about which DL model architectures are a good fit for AWS DL2q instances and the current model support matrix, see the Qualcomm Cloud AI100 documentation.

Available now

You can launch DL2q instances today in the US West (Oregon) and Europe (Frankfurt) AWS Regions as On-Demand, Reserved, and Spot Instances, or as part of a Savings Plan. As usual with Amazon EC2, you pay only for what you use. For more information, see Amazon EC2 pricing.

DL2q instances can be deployed using AWS Deep Learning AMIs (DLAMI), and container images are available through managed services such as Amazon SageMaker, Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), and AWS ParallelCluster.

To learn more, visit the Amazon EC2 DL2q instance page, and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.


About the authors

A.K Roy is a Director of Product Management at Qualcomm for Cloud and Datacenter AI products and solutions. He has over 20 years of experience in product strategy and development, with a current focus on best-in-class performance and performance-per-dollar end-to-end solutions for AI inference in the cloud, across a broad range of use cases including generative AI, LLMs, automotive, and hybrid AI.

Jianying Lang is a Principal Solutions Architect at AWS Worldwide Specialist Organization (WWSO). She has over 15 years of experience in the HPC and AI fields. At AWS, she focuses on helping customers deploy, optimize, and scale their AI/ML workloads on accelerated computing instances. She is passionate about combining techniques from the HPC and AI fields. Jianying holds a PhD in Computational Physics from the University of Colorado at Boulder.

Source: https://aws.amazon.com/blogs/machine-learning/amazon-ec2-dl2q-instance-for-cost-efficient-high-performance-ai-inference-is-now-generally-available/


