Hands-On OCR Test of the Phi-3-vision Small Model

MODEL-ZOO Nov 1, 2024

Phi-3 is the latest generation of Microsoft's small language models. It comes in four variants (for more information, check this link):

  • Phi-3-mini: a 3.8B parameter language model, available in two context lengths (128K and 4K)
  • Phi-3-small: a 7B parameter language model, available in two context lengths (128K and 8K)
  • Phi-3-medium: a 14B parameter language model, available in two context lengths (128K and 4K)
  • Phi-3-vision: a 4.2B parameter multimodal model with both language and vision capabilities

In this post I am interested in the application of the multimodal vision-language model. As stated in the official documentation, Phi-3-Vision-128K-Instruct is a lightweight, state-of-the-art open multimodal model that can be used in general-purpose AI systems and applications with visual and text input capabilities that require:

  • memory/compute constrained environments;
  • latency bound scenarios;
  • general image understanding;
  • OCR;
  • chart and table understanding.

In this post I examine the model's data-extraction capabilities when it is used as an OCR on personal documents such as identity cards, driving licenses and health insurance cards. The documents used in this test are facsimiles: they are not original documents and do not belong to real people.

You can find the complete notebook in my GitHub repository via this link.

1. Model Instantiation

To use the model in inference mode, I built an environment as follows:

conda create -n llm_images python=3.10

conda activate llm_images

pip install torch==2.3.0 torchvision==0.18.0

pip install packaging

pip install pillow==10.3.0 chardet==5.2.0 flash_attn==2.5.8 accelerate==0.30.1 bitsandbytes==0.43.1 Requests==2.31.0 transformers==4.40.2 albumentations==1.3.1 opencv-contrib-python==4.10.0.84 matplotlib==3.9.0

pip uninstall jupyter

conda install -c anaconda jupyter

conda update jupyter

pip install --upgrade 'nbconvert>=7' 'mistune>=2'

pip install cchardet
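
Before loading the model, it is worth checking that PyTorch actually sees the GPU, since the model will be mapped to CUDA. The snippet below is my own sanity check, not part of the original notebook:

import torch

# The model is loaded with device_map="cuda" later, so this must print True
print(torch.__version__)              # expected: 2.3.0
print(torch.cuda.is_available())      # True if a CUDA-capable GPU is visible
print(torch.cuda.get_device_name(0))  # name of the GPU that will run inference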

Once the environment was ready, I downloaded the model from the Hugging Face repository:

# Import necessary libraries
from PIL import Image
import requests
from transformers import AutoModelForCausalLM
from transformers import AutoProcessor
from transformers import BitsAndBytesConfig
import torch
from IPython.display import display
import time


# Define model ID
model_id = "microsoft/Phi-3-vision-128k-instruct"

# Load processor
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Define BitsAndBytes configuration for 4-bit quantization
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load model with 4-bit quantization and map to CUDA
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    trust_remote_code=True,
    torch_dtype="auto",
    quantization_config=nf4_config,
)
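
As a quick check that the 4-bit quantization actually took effect, you can print the model's memory footprint. This step is my addition, not part of the original post; with NF4 quantization the 4.2B-parameter model should occupy roughly 2-3 GB instead of about 8 GB in half precision:

# Optional: report how much memory the quantized model occupies
print(f"Memory footprint: {model.get_memory_footprint() / 1024**3:.2f} GB")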

Next, I prepared a Python function that takes the messages and the image path as input, sends them to the model, and prints the model's output:

def model_inference(messages, path_image):
    
    start_time = time.time()
    
    image = Image.open(path_image)

    # Prepare prompt with image token
    prompt = processor.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    # Process prompt and image for model input
    inputs = processor(prompt, [image], return_tensors="pt").to("cuda:0")

    # Generate text response using model
    generate_ids = model.generate(
        **inputs,
        eos_token_id=processor.tokenizer.eos_token_id,
        max_new_tokens=500,
        do_sample=False,
    )

    # Remove input tokens from generated response
    generate_ids = generate_ids[:, inputs["input_ids"].shape[1] :]

    # Decode generated IDs to text
    response = processor.batch_decode(
        generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )[0]


    display(image)
    end_time = time.time()
    print("Inference time: {}".format(end_time - start_time))

    # Print the generated response
    print(response)
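
The function above only prints the answer. For downstream use it is handy to parse the answer into a Python dict; the helper below is my own sketch, assuming model_inference is modified to return response instead of only printing it. It also strips the optional ```json fence that the model sometimes emits (as happens with the health insurance card in section 4):

import json
import re

def parse_json_response(response):
    # The model usually returns bare JSON, but sometimes wraps it in a
    # ```json ... ``` markdown fence, so strip the fence before parsing
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", response, re.DOTALL)
    if match:
        response = match.group(1)
    return json.loads(response)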

Next I show how to extract the data from each of the different documents. Depending on whether it is the front or the back of a document, I prepared a specific prompt that identifies the fields whose data I want to extract.

2. Identity Card OCR

For the front of the Italian identity card, I used the following prompt to extract the main personal data and place it in a JSON-formatted output:

prompt_cie_front = [{"role": "user", "content": "<|image_1|>\nOCR the text of the image. Extract the text of the following fields and put it in a JSON format: \
'Comune Di/ Municipality', 'COGNOME /Surname', 'NOME/NAME', 'LUOGO E DATA DI NASCITA/\
PLACE AND DATE OF BIRTH', 'SESSO/SEX', 'STATURA/HEIGHT', 'CITADINANZA/NATIONALITY',\
'EMISSIONE/ ISSUING', 'SCADENZA /EXPIRY'. Read the code at the top right and put it in the JSON field 'CODE'"}]

# Path to the local image
path_image = "/home/randellini/llm_images/resources/cie_fronte.jpg"

# inference
model_inference(prompt_cie_front, path_image)
Front of the Italian identity card

For the image above I obtained the following output. Note that the unique card code sits at the top right of the card without any associated field name; to extract its value, I specified in the prompt that the model must read the code at the top right and place it in a JSON field named 'CODE'. The only error is that the first zero of the unique code was replaced by the capital letter O.

Inference time: 9.793543815612793
{
"Comune Di/ Municipality": "SERENELLA MARITTIMA",
"COGNOME /Surname": "ROSSI",
"NOME/NAME": "BIANCA",
"LUOGO E DATA DI NASCITA": "PINO SULLA SPONDA DEL LAGO MAGGIORE (VA) 30.12.1964",
"SESSO/SEX": "F",
"STATURA/HEIGHT": "180",
"CITADINANZA/NATIONALITY": "ITA",
"EMISSIONE/ ISSUING": "30.05.2022",
"SCADENZA /EXPIRY": "30.12.2031",
"CODE": "CAO000AA"
}
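
Since the only error is a 0/O confusion in the machine-printed code, a simple post-processing heuristic can catch it. The snippet below is my own illustrative fix, not from the original article; the look-alike mapping and the digit positions are assumptions for this example:

# Replace look-alike letters with digits at positions known to be numeric
LOOKALIKES = {"O": "0", "I": "1", "S": "5", "B": "8"}

def fix_digit_lookalikes(code, digit_positions):
    return "".join(
        LOOKALIKES.get(ch, ch) if i in digit_positions else ch
        for i, ch in enumerate(code)
    )

# Positions 2-5 of this card code are assumed to be digits
print(fix_digit_lookalikes("CAO000AA", digit_positions={2, 3, 4, 5}))  # CA0000AA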

To extract the data on the back, I used the following prompt:

prompt_cie_back = [{"role": "user", "content": "<|image_1|>\nOCR the text of the image. Extract the text of the following fields and put it in a JSON format: \
'CODICE FISCALE/FISCAL CODE', 'ESTREMI ATTO DI NASCITA', 'INDIRIZZO DI RESIDENZA/RESIDENCE'"}]

# Path to the local image
path_image = "/home/randellini/llm_images/resources/cie_retro.jpg"

# inference
model_inference(prompt_cie_back, path_image)
Back of the Italian identity card

I got the following result. There is only one error: the third character of the fiscal code, the capital letter S, is missing.

Inference time: 4.082342147827148
{
  "codice_fiscale": "RSBNC64T70G677R",
  "estremi_atto_di_nascita": "00000.0A00",
  "indirizzo_di_residenza": "Via Salaria, 712"
}
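
Because the fiscal code has a rigid 16-character layout, a structural check catches this kind of omission immediately. The validation below is my own illustration, not part of the original article; a full check-character computation would be stricter still:

import re

# Codice fiscale layout: 6 letters, 2 digits, 1 letter, 2 digits,
# 1 letter, 3 digits, 1 check letter (16 characters in total)
CF_PATTERN = re.compile(r"[A-Z]{6}\d{2}[A-Z]\d{2}[A-Z]\d{3}[A-Z]")

def is_valid_cf_format(cf):
    return bool(CF_PATTERN.fullmatch(cf.upper()))

print(is_valid_cf_format("RSBNC64T70G677R"))   # False: 15 chars, flags the OCR error
print(is_valid_cf_format("RSSBNC64T70G677R"))  # True: code with the missing S restored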

3. Driving License OCR

For the front of the Italian driving license, I used the following prompt:

prompt_ld_front = [{"role": "user", "content": "<|image_1|>\nOCR the text of the image. Extract the text of the following fields and put it in a JSON format: \
'1.', '2.', '3.', '4a.', '4b.', '4c.', '5.','9.'"}]

# Path to the local image
path_image = "/home/randellini/llm_images/resources/patente_fronte.png"

# inference
model_inference(prompt_ld_front, path_image)
Front of the Italian driving license card

I obtained the result:

Inference time: 5.2030909061431885
{
"1": "ROSSI",
"2": "MARIA",
"3": "01/01/65",
"4a": "01/03/2014",
"4b": "01/01/2025",
"4c": "MIT-UCO",
"5": "A0A000000A",
"9": "B"
}

At the moment, for the back of the Italian driving license I have not yet found the right prompt to read the values in the table with fields '9.', '10.', '11.' and '12.'. Note that '12.' appears twice: first as the name of a table column, and then as a field at the bottom left of the card.

This last field is important because it warns that the driver must comply with certain obligations. For example, code 01 means the driver must wear contact lenses or glasses while driving. A possible workaround is sketched after the image below.

Back of the Italian driving license card
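
One workaround worth trying, sketched here as an untested idea of mine rather than something from the original article, is to crop the table region with PIL before inference so that the model sees only the problematic fields. The file name and the crop box below are placeholders that would have to be adapted to the actual scan:

from PIL import Image

# Hypothetical path and crop box: isolate the categories table before inference
image = Image.open("/home/randellini/llm_images/resources/patente_retro.png")
table_crop = image.crop((0, 0, image.width, image.height // 2))
table_crop.save("/home/randellini/llm_images/resources/patente_retro_table.png")

# The cropped file can then be passed to model_inference with a prompt that
# targets only fields '9.', '10.', '11.' and '12.'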

4. Health Insurance Card OCR

To read the values on the front of the Italian health insurance card, I used the prompt:

prompt_hic_front = [{"role": "user", "content": "<|image_1|>\nOCR the text of the image. Extract the text of the following fields and put it in a JSON format: \
'Codice Fiscale', 'Sesso', 'Cognome', 'Nome', 'Luogo di nascita', 'Provincia', 'Data di nascita', 'Data di scadenza'"}]

# Path to the local image
path_image = "/home/randellini/llm_images/resources/tessera_sanitaria_fronte.jpg"

# inference
model_inference(prompt_hic_front, path_image)
Front of the Italian health insurance card

I got the following result:

Inference time: 7.003508806228638
```json
{
  "Codice Fiscale": "RSSMRO62B25E205Y",
  "Sesso": "M",
  "Cognome": "ROSSI",
  "Nome": "MARIO",
  "Luogo di nascita": "CASSINA DE' PECCHI",
  "Provincia": "MI",
  "Data di nascita": "25/02/1962",
  "Data di scadenza": "10/10/2019"
}
```
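
Note that this time the model wrapped the JSON in a markdown fence; this is exactly the case the parse_json_response sketch in section 1 accounts for.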

To read the back of the card, I used the prompt:

prompt_hic_back = [{"role": "user", "content": "<|image_1|>\nOCR the text of the image. Extract the text of the following fields and put it in a JSON format: \
'3 Cognome', '4 Nome', '5 Data di nascita', '6 Numero identificativo personale', '7 Numero identificazione dell'istituzione', 'Numero di identificazione della tessera', '9 Scadenza'"}]

# Path to the local image
path_image = "/home/randellini/llm_images/resources/tessera_sanitaria_retro.jpg"

# inference
model_inference(prompt_hic_back, path_image)
Back of the Italian health insurance card

I obtained:

Inference time: 7.403932809829712
{
"3 Cognome": "ROSSI",
"4 Nome": "MARIO",
"5 Data di nascita": "25/02/1962",
"6 Numero identificativo personale": "RSSMRO62B25E205Y",
"7 Numero identificazione dell'istituzione": "0030 - LOMBARDIA",
"Numero di identificazione della tessera": "80380800301234567890",
"9 Scadenza": "01/01/2006"
}

Original article: Exploring the Microsoft Phi3 Vision Language model as OCR for document data extraction

Translated and compiled by 汇智网; please credit the source when republishing.
