License: MIT License


uspto-patent-data-parser's Introduction


United States Patent and Trademark Office (USPTO) Data Parser

A Python tool for reading, parsing, and finding patents using the United States Patent and Trademark Office (USPTO) Bulk Data Storage System. This tool is designed to parse the Patent Grant Full Text Data section, which contains the full text of each patent grant issued weekly (Tuesdays) from January 1, 1976 to the present (images/drawings excluded).

Requirements

  • Python >= 3.5
  • Pandas
  • Beautifulsoup4 >= 4.6.3

Installation

from uspto import *

Usage

Get a list of the files available for a specific year

file_list = get_patent_files_by_year(2008)

Download file to disk

url = 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/1999/pftaps19990406_wk14.zip'
download_file_to_disk(url,'your/target/path')
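
The two helpers above can be combined to fetch every weekly archive for a year. A rough sketch, assuming get_patent_files_by_year returns file names relative to the year's bulk-data directory (if it already returns full URLs, drop the base_url prefix):

# Hypothetical loop over one year's archives; adjust base_url and the
# target path for your setup.
base_url = 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/2008/'
for file_name in get_patent_files_by_year(2008):
    download_file_to_disk(base_url + file_name, 'your/target/path')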

Read and parse data

1- For the period 1976-2001, files are in txt format and the following data items can be parsed:

  • INVT: inventor information.
  • ASSG: assignee information.
  • PRIR: foreign priority information.
  • REIS: reissue information.
  • RLAP: related application information.
  • CLAS: classification information.
  • UREF: US reference information (citations).
  • FREF: foreign reference information.
  • OREF: other reference information.
  • LREP: legal information.
  • ABST: abstract information.
  • GOVT: government interest information.
  • PARN: parent case information.
  • BSUM: summary information.
  • DETD: detailed description information.
  • CLMS: claims information.
  • URL : adds the URL to the bibliographic information.
Read and parse full year data (could take several hours).

The code below will read and parse all files for the year 1980 and extract inventor and assignee data.

data = download_yearly_data(1980,['INVT','ASSG'])
Read and parse single file from URL

The code below will parse all patents in the provided zip file (year 1980) and extract inventor and assignee data. The txt file is read and parsed in memory without being downloaded to disk.

url = 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/1980/pftaps19801230_wk53.zip'
items = ['INVT','ASSG']
data = read_and_parse_from_url(url,items)
Read and parse single file from disk

The code below reads a txt file (same as in the link above) that has already been downloaded to disk and will return inventor and assignee data.

data = read_and_parse_file_from_disk('path/to/file/pftaps19801230_wk53.txt',['INVT','ASSG'],'txt')
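
Since Pandas is already listed as a requirement, the parsed records can be loaded into a DataFrame for inspection. A minimal sketch, continuing from the call above and assuming each element of data is a (possibly nested) dictionary per patent:

import pandas as pd

# 'data' is the list returned by read_and_parse_file_from_disk above.
# The record layout depends on the requested items; inspect data[0] first,
# then flatten the nested dictionaries into a table.
df = pd.json_normalize(data)
print(df.head())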

2- For the period 2002-2004, files are in xml-version2 format, and the following items can be parsed:

  • INVT: inventor information.
  • ASSG: assignee information.
  • PRIP: foreign priority information.
  • REIS: reissue information.
  • RLAP: related application information.
  • CLAS: classification information.
  • CITA: US reference information (citations).
  • OREF: other reference information.
  • LREP: legal information.
  • ABST: abstract information.
  • GOVT: government interest information.
  • BSUM: summary information.
  • DETD: detailed description information.
  • CLMS: claims information.
  • URL : adds the URL to the bibliographic information.
Read and parse full year data (could take several hours)

The code below will read and parse all files for the year 2002 and extract inventor and assignee data.

data = download_yearly_data(2002,['INVT','ASSG'])
Read and parse single file from URL

The code below will parse all patents in the provided zip file (year 2002) and extract inventor and assignee data. The xml-version2 file is read and parsed in memory without being downloaded to disk.

url = 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/2002/pg020101.zip'
items = ['INVT','ASSG']
xb = read_and_parse_from_url(url,items)
Read and parse single file from disk

The code below reads an xml-version2 file (same as in the link above) that has already been downloaded to disk and will return inventor and assignee data.

data = read_and_parse_file_from_disk('path/to/file/pg020101.xml',['INVT','ASSG'],'xml2')

3- For the period 2005-present, files are in xml-version4 format, and the following items can be parsed:

  • INVT: inventor information.
  • ASSG: assignee information.
  • PRIP: foreign priority information.
  • CLAS: classification information.
  • CITA: US reference information (citations).
  • OREF: other reference information.
  • LREP: legal information.
  • ABST: abstract information.
  • DETD: detailed description information.
  • CLMS: claims information.
  • URL : adds the URL to the bibliographic information.
Read and parse full year data (could take several hours)

The code below will read and parse all files for the year 2008 and extract inventor and assignee data.

data = download_yearly_data(2008,['INVT','ASSG'])
Read and parse single file from URL

The code below will parse all patents in the provided zip file (year 2008) and extract inventor and assignee data. The xml-version4 file is read and parsed in memory without being downloaded to disk.

url = 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/2008/ipg080101.zip'
items = ['INVT','ASSG']
data = read_and_parse_from_url(url,items)
Read and parse single file from disk

The code below reads an xml-version4 file (same as in the link above) that has already been downloaded to disk and will return inventor and assignee data.

data = read_and_parse_file_from_disk('path/to/file/ipg080101.xml',['INVT','ASSG'],'xml4')


uspto-patent-data-parser's Issues

Doc-number in application-reference overwrites doc-number in publication-reference

When parsing the bibliographic information, the keys are simply merged into one dictionary:

    invention_title = root_tree.find(invention_title_path)
    document_data = {}    
    if publication_info != None:
        publication_reference_info = {element.tag: element.text for element in list(publication_info)}
        document_data = {**document_data,**publication_reference_info}
    if application_info !=None:
        application_reference_info = {element.tag: element.text for element in list(application_info)}
        if application_info.attrib and application_info.attrib['appl-type']:
            application_reference_info['application_type'] =  application_info.attrib['appl-type']
        document_data = {**document_data,**application_reference_info}

source

An example patent might look like this (xml4)

<publication-reference>
<document-id>
<country>US</country>
<doc-number>09784948</doc-number>
<kind>B2</kind>
<date>20171010</date>
</document-id>
</publication-reference>
<application-reference appl-type="utility">
<document-id>
<country>US</country>
<doc-number>15067369</doc-number>
<date>20160311</date>
</document-id>
</application-reference>

The resulting dictionary lacks the patent id now, containing only the application id:

[{'bibliographic_information': {'country': 'US', 'doc-number': '15067369', 'kind': 'B2', 'date': '20160311', 'invention_title': 'xxx'}}]
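
A possible fix (a sketch, not the project's current code) is to avoid the key collision by prefixing the keys taken from application-reference before merging, so both doc-numbers survive:

if application_info is not None:
    # Prefix the application keys (e.g. 'application_doc-number') so they
    # cannot overwrite the publication-reference values merged above.
    application_reference_info = {
        'application_' + element.tag: element.text
        for element in list(application_info)
    }
    if application_info.attrib and application_info.attrib.get('appl-type'):
        application_reference_info['application_type'] = application_info.attrib['appl-type']
    document_data = {**document_data, **application_reference_info}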

Convert doc-number to patent number?

I noticed the parser returned the doc-number rather than patent number for the patents. Although one can search a patent using doc-number, I cannot find a mapping for doc-number vs. patent number. Do you know how to get the patent number? Thanks!

Suggestion about func "read_and_parse_txt_from_disk"

I suggest this function be changed as follows, because I ran into an encoding problem:

def read_and_parse_txt_from_disk(path_to_file,data_items):
    try:
        with open(path_to_file,'r',encoding='utf-8') as f:
            txt = f.read()
    except:
        with open(path_to_file,'r',encoding='latin1') as f:
            txt = f.read()
    txt = txt.split('\n')
    raw_patent_data= get_patents_list(txt)
    parsed_data = []
    for patent in raw_patent_data:
        parsed_data.append(parse_txt_patent_data(patent,data_items_list = data_items))
    return parsed_data
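
A slightly narrower variant of the same idea: catching UnicodeDecodeError instead of using a bare except avoids masking unrelated I/O errors such as a missing file (the helper name below is purely illustrative):

def read_text_with_fallback(path_to_file):
    """Read a file as UTF-8, falling back to Latin-1 on decode errors only."""
    try:
        with open(path_to_file, 'r', encoding='utf-8') as f:
            return f.read()
    except UnicodeDecodeError:
        with open(path_to_file, 'r', encoding='latin1') as f:
            return f.read()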

Parse 1998 data error

Hello, when I try to parse the 1998 data there is an error: the function get_patents_list returns an empty list. If I change the code to this:

def get_patents_list(patents_txt_data):
    patents_data = []
    current_patent = []
    for line in patents_txt_data[1:]:
        cleaned_line = ' '.join(line.split())
        if cleaned_line.startswith('PATN'):
            if current_patent:
                patents_data.append(current_patent)
                current_patent = []
            current_patent.append(cleaned_line)
        else:
            current_patent.append(cleaned_line)
    if current_patent:
        patents_data.append(current_patent)
    for i in range(len(patents_data)):
        patent = patents_data[i]
        patents_data[i] = [[word for word in line.split() if word] for line in patent]
    return patents_data

Then it works. However, it only works for 1998; when I tried to use the new function to parse 1999, it did not. I guess not all the years have been tested, so could you help solve this problem and make the code more robust? Thank you.

Only the first line of claim text is read in

When looking for claim data, only the first line of claim data is ingested.

Claims can contain many lines of text. An example:

<claim id="CLM-00001" num="00001">
<claim-text>1. An imaging lens system including, in order from an object side to an image side:
<claim-text>a first lens element having a concave image-side surface;</claim-text>
<claim-text>a second lens element;</claim-text>
<claim-text>a third lens element with negative refractive power having a convex object-side surface and a concave image-side surface, the object-side and image-side surfaces thereof being aspheric;</claim-text>
<claim-text>a fourth lens element with positive refractive power having a convex image-side surface; and</claim-text>
<claim-text>a fifth lens element with negative refractive power having a convex object-side surface and a concave image-side surface, the object-side and image-side surfaces thereof being aspheric, each of the object-side and image-side surfaces thereof being provided with at least one inflection point;</claim-text>
<claim-text>wherein there are a total of five lens elements in the imaging lens system, and a gap exists between every two adjacent lens elements along an optical axis of the imaging lens system.</claim-text>
</claim-text>
</claim>

Results in claim data being the following:

'claim_information': [{'id': 'CLM-00001', 'num': '00001', 'claim_text': ['1. An imaging lens system including, in order from an object side to an image side:\n']}
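
One way to capture the complete claim text is to join every text node nested under the <claim> element instead of keeping only the first <claim-text> string. A minimal sketch using xml.etree.ElementTree on an abbreviated version of the example above (not the parser's current code):

import xml.etree.ElementTree as ET

claim_xml = '''<claim id="CLM-00001" num="00001">
<claim-text>1. An imaging lens system including, in order from an object side to an image side:
<claim-text>a first lens element having a concave image-side surface;</claim-text>
<claim-text>a second lens element;</claim-text>
</claim-text>
</claim>'''

claim = ET.fromstring(claim_xml)
# itertext() yields the text of every nested <claim-text> node in document
# order, so no claim line is dropped.
full_text = ' '.join(part.strip() for part in claim.itertext() if part.strip())
print(full_text)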
