ropensci / npi

Access the U.S. National Provider Identifier (NPI) Registry Public Search API

Home Page: https://docs.ropensci.org/npi/

License: Other

api-wrapper r npi-number healthcare health-data r-package rstats

npi's Introduction

npi

Access the U.S. National Provider Identifier Registry API

Peer-reviewed at rOpenSci Software Peer Review. Project Status: Active – The project has reached a stable, usable state and is being actively developed.

Use R to access the U.S. National Provider Identifier (NPI) Registry API (v2.1) by the Centers for Medicare & Medicaid Services (CMS): https://npiregistry.cms.hhs.gov/. Obtain rich administrative data linked to a specific individual or organizational healthcare provider, or perform advanced searches based on provider name, location, type of service, credentials, and many other attributes. npi provides convenience functions for data extraction so you can spend less time wrangling data and more time putting data to work.

Analysts working with healthcare and public health data frequently need to join data from multiple sources to answer their business or research questions. Unfortunately, joining data in healthcare is hard because so few entities have unique, consistent identifiers across organizational boundaries. NPI numbers, however, do not suffer from these limitations, as all U.S. providers meeting certain common criteria must have an NPI number in order to be reimbursed for the services they provide. This makes NPI numbers incredibly useful for joining multiple datasets by provider, which is the primary motivation for developing this package.

Installation

There are three ways to install the npi package:

  1. Install from CRAN:
install.packages("npi")
library(npi)
  2. Install from R-universe:
install.packages("npi", repos = "https://ropensci.r-universe.dev")
library(npi)
  3. Install from GitHub using the devtools package:
devtools::install_github("ropensci/npi")
library(npi)

Usage

npi exports four functions, all of which match the pattern "npi_*":

  • npi_search(): Search the NPI Registry and return the response as a tibble with high-cardinality data organized into list columns.
  • npi_summarize(): A method for displaying a nice overview of results from npi_search().
  • npi_flatten(): A method for flattening one or more list columns from a search result, joined by NPI number.
  • npi_is_valid(): Check the validity of one or more NPI numbers using the official NPI enumeration standard.

Search the registry

npi_search() exposes nearly all of the NPPES API’s search parameters. Let’s say we wanted to find up to 10 providers with primary locations in New York City:

nyc <- npi_search(city = "New York City")
# Your results may differ since the data in the NPPES database changes over time
nyc
#> # A tibble: 10 × 11
#>       npi enume…¹ basic    other_…² identi…³ taxono…⁴ addres…⁵ practi…⁶ endpoi…⁷
#>  *  <int> <chr>   <list>   <list>   <list>   <list>   <list>   <list>   <list>  
#>  1 1.19e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#>  2 1.31e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#>  3 1.64e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#>  4 1.35e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#>  5 1.56e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#>  6 1.79e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#>  7 1.56e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#>  8 1.96e9 Organi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#>  9 1.43e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#> 10 1.33e9 Indivi… <tibble> <tibble> <tibble> <tibble> <tibble> <tibble> <tibble>
#> # … with 2 more variables: created_date <dttm>, last_updated_date <dttm>, and
#> #   abbreviated variable names ¹​enumeration_type, ²​other_names, ³​identifiers,
#> #   ⁴​taxonomies, ⁵​addresses, ⁶​practice_locations, ⁷​endpoints

The full search results have four regular vector columns (npi, enumeration_type, created_date, and last_updated_date) and seven list columns. Each list column is a collection of related data:

  • basic: Basic profile information about the provider
  • other_names: Other names used by the provider
  • identifiers: Other provider identifiers and credential information
  • taxonomies: Service classification and license information
  • addresses: Location and mailing address information
  • practice_locations: Provider’s practice locations
  • endpoints: Details about provider’s endpoints for health information exchange

A full list of the possible fields within these list columns can be found on the NPPES API Help page.

If you’re comfortable working with list columns, this may be all you need from the package. However, npi also provides functions that can help you summarize and transform your search results.
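For example, you could unnest a single list column yourself with dplyr and tidyr. The sketch below reuses the nyc tibble from above; address_purpose is one of the address fields returned by the API:

```r
library(dplyr)
library(tidyr)

# One row per address per provider, keeping only practice locations
nyc %>%
  select(npi, addresses) %>%
  unnest(addresses) %>%
  filter(address_purpose == "LOCATION")
```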

Working with search results

npi has two main helper functions for working with search results: npi_summarize() and npi_flatten().

Summarizing results

Run npi_summarize() on your results to see a more human-readable overview of your search results. Specifically, the function returns the NPI number, provider’s name, enumeration type (individual or organizational provider), primary address, phone number, and primary taxonomy (area of practice):

npi_summarize(nyc)
#> # A tibble: 10 × 6
#>           npi name                                 enume…¹ prima…² phone prima…³
#>         <int> <chr>                                <chr>   <chr>   <chr> <chr>  
#>  1 1194276360 ALYSSA COWNAN                        Indivi… 5 E 98… 212-… Physic…
#>  2 1306849641 MARK MOHRMANN                        Indivi… 16 PAR… 212-… Orthop…
#>  3 1639173065 SAKSHI DUA                           Indivi… 10 E 1… 212-… Nurse …
#>  4 1346604592 SARAH LOWRY                          Indivi… 1335 D… 614-… Occupa…
#>  5 1558362566 AMY TIERSTEN                         Indivi… 1176 5… 212-… Psychi…
#>  6 1790786416 NOAH GOLDMAN                         Indivi… 140 BE… 973-… Intern…
#>  7 1558713628 ROBYN NOHLING                        Indivi… 9 HOPE… 781-… Nurse …
#>  8 1962983775 LENOX HILL MEDICAL ANESTHESIOLOGY, … Organi… 100 E … 212-… Intern…
#>  9 1427454529 YONGHONG TAN                         Indivi… 34 MAP… 203-… Obstet…
#> 10 1326403213 RAJEE KRAUSE                         Indivi… 12401 … 347-… Nurse …
#> # … with abbreviated variable names ¹​enumeration_type,
#> #   ²​primary_practice_address, ³​primary_taxonomy

Flattening results

As seen above, the data frame returned by npi_search() has a nested structure. Although all the data in a single row relates to one NPI, each list column contains a list of one or more values corresponding to the NPI for that row. For example, a provider’s NPI record may have multiple associated addresses, phone numbers, taxonomies, and other attributes, all of which live in the same row of the data frame.

Because nested structures can be a little tricky to work with, npi includes npi_flatten(), a function that transforms the data frame into a flatter (i.e., unnested and merged) structure that's easier to use. npi_flatten() performs the following transformations:

  • unnest the list columns
  • prefix the name of each unnested column with the name of its original list column
  • left-join the data together by NPI
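Roughly, these three steps correspond to the following dplyr/tidyr pipeline (a sketch of the idea only, not the package's actual implementation, reusing the nyc tibble from above):

```r
library(dplyr)
library(tidyr)

# Unnest two list columns separately, prefixing each new column with the
# name of its original list column...
basic <- nyc %>%
  select(npi, basic) %>%
  unnest(basic, names_sep = "_")

taxonomies <- nyc %>%
  select(npi, taxonomies) %>%
  unnest(taxonomies, names_sep = "_")

# ...then left-join the results back together by NPI
left_join(basic, taxonomies, by = "npi")
```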

npi_flatten() supports a variety of approaches to flattening the results from npi_search(). One extreme is to flatten everything at once:

npi_flatten(nyc)
#> # A tibble: 48 × 42
#>           npi basic_fi…¹ basic…² basic…³ basic…⁴ basic…⁵ basic…⁶ basic…⁷ basic…⁸
#>         <int> <chr>      <chr>   <chr>   <chr>   <chr>   <chr>   <chr>   <chr>  
#>  1 1194276360 ALYSSA     COWNAN  PA      NO      F       2016-1… 2018-0… A      
#>  2 1194276360 ALYSSA     COWNAN  PA      NO      F       2016-1… 2018-0… A      
#>  3 1306849641 MARK       MOHRMA… MD      NO      M       2005-0… 2019-0… A      
#>  4 1306849641 MARK       MOHRMA… MD      NO      M       2005-0… 2019-0… A      
#>  5 1306849641 MARK       MOHRMA… MD      NO      M       2005-0… 2019-0… A      
#>  6 1306849641 MARK       MOHRMA… MD      NO      M       2005-0… 2019-0… A      
#>  7 1326403213 RAJEE      KRAUSE  AGPCNP… NO      F       2015-1… 2019-0… A      
#>  8 1326403213 RAJEE      KRAUSE  AGPCNP… NO      F       2015-1… 2019-0… A      
#>  9 1326403213 RAJEE      KRAUSE  AGPCNP… NO      F       2015-1… 2019-0… A      
#> 10 1326403213 RAJEE      KRAUSE  AGPCNP… NO      F       2015-1… 2019-0… A      
#> # … with 38 more rows, 33 more variables: basic_name <chr>,
#> #   basic_name_prefix <chr>, basic_middle_name <chr>,
#> #   basic_organization_name <chr>, basic_organizational_subpart <chr>,
#> #   basic_authorized_official_credential <chr>,
#> #   basic_authorized_official_first_name <chr>,
#> #   basic_authorized_official_last_name <chr>,
#> #   basic_authorized_official_middle_name <chr>, …

However, due to the number of fields and the large number of potential combinations of values, this approach is best suited to small datasets. More likely, you’ll want to flatten a small number of list columns from the original data frame in one pass, repeating the process with other list columns you want and merging after the fact. For example, to flatten basic provider and provider taxonomy information, supply the corresponding list columns as a vector of names to the cols argument:

# Flatten basic provider info and provider taxonomy, preserving the relationship
# of each to NPI number and discarding other list columns.
npi_flatten(nyc, cols = c("basic", "taxonomies"))
#> # A tibble: 20 × 26
#>           npi basic_fi…¹ basic…² basic…³ basic…⁴ basic…⁵ basic…⁶ basic…⁷ basic…⁸
#>         <int> <chr>      <chr>   <chr>   <chr>   <chr>   <chr>   <chr>   <chr>  
#>  1 1194276360 ALYSSA     COWNAN  PA      NO      F       2016-1… 2018-0… A      
#>  2 1306849641 MARK       MOHRMA… MD      NO      M       2005-0… 2019-0… A      
#>  3 1306849641 MARK       MOHRMA… MD      NO      M       2005-0… 2019-0… A      
#>  4 1326403213 RAJEE      KRAUSE  AGPCNP… NO      F       2015-1… 2019-0… A      
#>  5 1326403213 RAJEE      KRAUSE  AGPCNP… NO      F       2015-1… 2019-0… A      
#>  6 1326403213 RAJEE      KRAUSE  AGPCNP… NO      F       2015-1… 2019-0… A      
#>  7 1346604592 SARAH      LOWRY   OTR/L   YES     F       2016-0… 2018-0… A      
#>  8 1346604592 SARAH      LOWRY   OTR/L   YES     F       2016-0… 2018-0… A      
#>  9 1427454529 YONGHONG   TAN     <NA>    NO      F       2014-1… 2018-1… A      
#> 10 1558362566 AMY        TIERST… M.D.    YES     F       2005-0… 2019-0… A      
#> 11 1558713628 ROBYN      NOHLING FNP-BC… YES     F       2016-0… 2018-0… A      
#> 12 1558713628 ROBYN      NOHLING FNP-BC… YES     F       2016-0… 2018-0… A      
#> 13 1558713628 ROBYN      NOHLING FNP-BC… YES     F       2016-0… 2018-0… A      
#> 14 1558713628 ROBYN      NOHLING FNP-BC… YES     F       2016-0… 2018-0… A      
#> 15 1558713628 ROBYN      NOHLING FNP-BC… YES     F       2016-0… 2018-0… A      
#> 16 1558713628 ROBYN      NOHLING FNP-BC… YES     F       2016-0… 2018-0… A      
#> 17 1639173065 SAKSHI     DUA     M.D.    YES     F       2005-0… 2019-0… A      
#> 18 1639173065 SAKSHI     DUA     M.D.    YES     F       2005-0… 2019-0… A      
#> 19 1790786416 NOAH       GOLDMAN M.D.    NO      M       2005-0… 2018-0… A      
#> 20 1962983775 <NA>       <NA>    <NA>    <NA>    <NA>    2018-0… 2018-0… A      
#> # … with 17 more variables: basic_name <chr>, basic_name_prefix <chr>,
#> #   basic_middle_name <chr>, basic_organization_name <chr>,
#> #   basic_organizational_subpart <chr>,
#> #   basic_authorized_official_credential <chr>,
#> #   basic_authorized_official_first_name <chr>,
#> #   basic_authorized_official_last_name <chr>,
#> #   basic_authorized_official_middle_name <chr>, …
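The flatten-in-pieces-then-merge workflow described above might look like this sketch, reusing the nyc tibble:

```r
library(dplyr)

# Flatten two sets of list columns separately...
basic_tax <- npi_flatten(nyc, cols = c("basic", "taxonomies"))
addresses <- npi_flatten(nyc, cols = "addresses")

# ...then merge them back together, keyed by NPI
left_join(basic_tax, addresses, by = "npi")
```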

Validating NPIs

Like credit card numbers, NPI numbers can be mistyped or corrupted in transit. To guard against this, officially issued NPI numbers include a check digit for error detection. Use npi_is_valid() to check whether an NPI number you've encountered is validly constructed:

# Validate NPIs
npi_is_valid(1234567893)
#> [1] TRUE
npi_is_valid(1234567898)
#> [1] FALSE

Note that this function doesn’t check whether the NPI numbers are activated or deactivated (see #22). It merely checks for the number’s consistency with the NPI specification. As such, it can help you detect and handle data quality issues early.
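For the curious, the check is the Luhn algorithm applied to the NPI's first nine digits prefixed with the constant "80840" (the card-issuer prefix assigned to U.S. health applications). The sketch below illustrates the rule; check_npi() is a hypothetical name, not a function exported by the package.

```r
# Hypothetical sketch of the NPI check-digit rule: Luhn over "80840" + NPI
check_npi <- function(npi) {
  digits <- as.integer(strsplit(paste0("80840", npi), "")[[1]])
  # Double every second digit, counting from the right
  double <- rev(seq_along(digits)) %% 2 == 0
  vals <- digits * ifelse(double, 2, 1)
  # Digit-sum any doubled value over 9 (e.g., 14 -> 1 + 4 = 5)
  vals <- ifelse(vals > 9, vals - 9, vals)
  sum(vals) %% 10 == 0
}

check_npi("1234567893")  # TRUE
check_npi("1234567898")  # FALSE
```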

Set your own user agent

A user agent is a way for the software interacting with an API to tell it who or what is making the request. This helps the API’s maintainers understand what systems are using the API. By default, when npi makes a request to the NPPES API, the request header references the name of the package and the URL for the repository (e.g., ‘npi/0.2.0 (https://github.com/ropensci/npi)’). If you want to set a custom user agent, update the value of the npi_user_agent option. For example, for version 1.0.0 of an app called “my_app”, you could run the following code:

options(npi_user_agent = "my_app/1.0.0")

Package Website

npi has a website with release notes, documentation on all user functions, and examples showing how the package can be used.

Reporting Bugs

Did you spot a bug? I’d love to hear about it at the issues page.

Code of Conduct

Please note that this package is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Contributing

Interested in learning how you can contribute to npi? Head over to the contributor guide—and thanks for considering!

How to cite this package

For the latest citation, see the Authors and Citation page on the package website.

License

MIT (c) Frank Farach

This package’s logo is licensed under CC BY-SA 4.0 and co-created by Frank Farach and Sam Parmar. The logo uses a modified version of an image of the Rod of Asclepius and a magnifying glass that is attributed to Evanherk, GFDL.

npi's People

Contributors

frankfarach, parmsam


npi's Issues

Fix tidyr warning

Note: This is a bug related to #24.

From ropensci/software-review#505 (comment):

On several occasions we face this message:

Warning message:
The `.sep` argument of `unnest()` is deprecated as of tidyr 1.0.0.
Use `names_sep = '_'` instead.

even though there is a function, tidyr_new_interface(), to check for the installed tidyr version.

On my computer, I got the warning when using npi_flatten() even though I had tidyr v1.2.0 installed.

Allow response paging

From the NPPES API help page:

An API query will return a maximum of 200 results per request. The Skip field in the API will let you skip up to 1000 records. By using these two fields with your search criteria, you can get up to a maximum of 1,200 records over six requests.
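Until the package supports paging, one workaround is to query the API directly with httr, stepping the skip parameter in increments of 200. This is a sketch based on the API behavior quoted above; the query fields follow the NPPES API documentation:

```r
library(httr)
library(jsonlite)

base_url <- "https://npiregistry.cms.hhs.gov/api/"

# Six requests of 200 records each: the documented maximum of 1,200
pages <- lapply(seq(0, 1000, by = 200), function(skip) {
  resp <- GET(base_url, query = list(version = "2.1", city = "Washington",
                                     state = "DC", limit = 200, skip = skip))
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))$results
})
```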

Unable to 'skip' records

Hi Frank -- this is a fantastic project. Thank you very much for making it available.
I am having a hard time using the skip field (I need it to build out a list of all the providers in D.C., not just the first 200).
Creating the first data frame works fine; it's only from the second one on that I have issues. Thanks!

dc_200 <- npi_search(city = "Washington",
                     state = "DC",
                     limit = 200)

dc_400 <- npi_search(city = "Washington",
                     state = "DC",
                     limit = 200,
                     skip = 200)
#> Error in npi_search(city = "Washington", state = "DC", limit = 200, skip = 200) : unused argument (skip = 200)

dc_600 <- npi_search(city = "Washington",
                     state = "DC",
                     limit = 200,
                     skip = 400)
#> Error in npi_search(city = "Washington", state = "DC", limit = 200, skip = 400) : unused argument (skip = 400)

dc_800 <- npi_search(city = "Washington",
                     state = "DC",
                     limit = 200,
                     skip = 600)
#> Error in npi_search(city = "Washington", state = "DC", limit = 200, skip = 600) : unused argument (skip = 600)

Deactivated NPIs

@frankfarach Thank you very much for creating this package. I only discovered it after I miserably failed to create a function of my own.

I am passing about 3,000 calls to the API, and some of these NPIs appear to be deactivated.

Is there an approach that can be used to handle deactivated NPIs?

The deactivated NPIs appear to be valid when I pass them to npi_is_valid()

An example of a deactivated NPI is 1710983663

This is an example of what I am trying to run

examples <- data.frame(names = c(1003060377,
                                # 1710983663, this is a deactivated NPI
                                 1003213240,
                                 1003116930,
                                 1003020306,
                                 1003292350,
                                 1003094988,
                                 1003164716,
                                 1003156324,
                                 1003219981))

result <- vector('list', nrow(examples))

for(i in seq(nrow(examples))) {
  # Sleep for 5 seconds after every 2 requests to avoid hammering the API
  if(i %% 2 == 0) Sys.sleep(5)
  result[[i]] <- npi::npi_search(examples$names[i])
}

library(dplyr)
library(tidyr)

x <- bind_rows(result) %>%
  select(npi, addresses) %>%
  unnest(addresses) %>%
  filter(address_purpose == "LOCATION") %>%
  select(npi,
         address_1,
         address_2,
         city,
         state,
         postal_code)

Clarify description of npis dataset

From ropensci/software-review#505 (comment):

The description of the npis dataset doesn't seem to fit the dataset, as it specifies "list of 0-n tibbles" for one column; it's not clear to me how to read that. That said, I think it's nice to indicate the type of element. One way to make it more explicit would be to say "list-column of tibbles with X rows and Y columns".

Clarify API rate limitation

From ropensci/software-review#505 (comment):

Is there a form of rate limitation? It seems likely, given that we're accessing an API, but nothing is stated from either the API's standpoint or the package's. (I've seen it mentioned in the "Getting started" vignette, but it could be more explicit.) If there is a limit, it would be a nice thing to mention so that users are aware of how long it will take to get their results back.

Do this in at least two places: README.md and the vignettes.

User agent documentation and customization

From ropensci/software-review#505 (comment):

It's great that the package provides a user agent; it's the best possible way for an API to identify who is accessing it.

The README mentions the user agent (https://github.com/frankfarach/npi/blob/d4e98a52ccc0a7f71328582333e5b4191f4796b9/README.Rmd#L114-L120), which seems like a great idea. However, it doesn't consider that users may not be familiar with the concept of a user agent, and the example doesn't show how changing the user agent can be helpful. Maybe you could use a more concrete example. I also wonder whether it's a good idea to let users entirely remove the reference to the package.

To improve the user agent, which currently only points to the URL of the repo, it would be good to indicate the version of npi in use, as is done in taxize. If users need to modify their user agent, providing a fixed part of the UA (with the package version) plus a customizable second part would be even more complete.

Add CodeMeta JSON file to package

Add a codemeta.json file to the repo to make the package easier to discover.

Acceptance criteria:

  1. repo contains codemeta.json
  2. codemeta.json is newer than DESCRIPTION

Idea for future development:

  • Consider adding a pre-commit hook in another issue to require that acceptance criterion 2 above be met prior to committing to the master branch.
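A minimal way to satisfy criterion 1 is the rOpenSci codemetar package:

```r
# Writes codemeta.json in the package root, derived from DESCRIPTION
codemetar::write_codemeta()
```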

Handle wildcard special cases

From ropensci/software-review#505 (comment):

Some special queries with wildcards used improperly (i.e., not trailing) could be caught earlier; I got strange results with some.

reprex:

library("npi")

# Trying to mess up with wildcards
ko = npi_search(last_name = "M*ll*")
#> Requesting records 0-10...
ko$basic  # The answer has nothing to do with the pattern
#> [[1]]
#> # A tibble: 1 x 11
#>   first_name last_name middle_name credential sole_proprietor gender
#>   <chr>      <chr>     <chr>       <chr>      <chr>           <chr> 
#> 1 JENNY      ENSTROM   E           PA         NO              F     
#> # ... with 5 more variables: enumeration_date <chr>, last_updated <chr>,
#> #   status <chr>, name <chr>, certification_date <chr>

# Wildcards in the middle do not work
ko = npi_search(last_name = "M*ll")
#> Requesting records 0-10...
ko$basic  # And I get the same answer instead of an error?!
#> [[1]]
#> # A tibble: 1 x 11
#>   first_name last_name middle_name credential sole_proprietor gender
#>   <chr>      <chr>     <chr>       <chr>      <chr>           <chr> 
#> 1 JENNY      ENSTROM   E           PA         NO              F     
#> # ... with 5 more variables: enumeration_date <chr>, last_updated <chr>,
#> #   status <chr>, name <chr>, certification_date <chr>

Created on 2022-04-04 by the reprex package (v2.0.1)
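One possible early check is to reject wildcards that are not trailing. The sketch below assumes the API's documented rule of at least two leading characters followed by at most one trailing asterisk; is_valid_wildcard() is a hypothetical helper, not part of the package.

```r
# Hypothetical validator: >= 2 non-wildcard characters, optional trailing "*"
is_valid_wildcard <- function(x) grepl("^[^*]{2,}[*]?$", x)

is_valid_wildcard("Smith*")  # TRUE  (trailing wildcard)
is_valid_wildcard("M*ll*")   # FALSE (wildcard in the middle)
is_valid_wildcard("M*ll")    # FALSE (wildcard in the middle)
```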

Normalize credentials

Credentials should be:

  • uppercase
  • without commas or periods (hyphens allowed)
  • a list column with one credential per list element
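These rules could be sketched as follows; normalize_credentials() is a hypothetical helper, not a function in the package.

```r
# Hypothetical sketch: uppercase, strip periods, split comma-separated
# credentials into one element each (hyphens are left intact)
normalize_credentials <- function(x) {
  x <- toupper(x)
  x <- gsub("[.]", "", x)     # drop periods
  strsplit(x, ",\\s*")        # one credential per list element
}

normalize_credentials("M.D., Ph.D.")  # list(c("MD", "PHD"))
normalize_credentials("FNP-BC, RN")   # list(c("FNP-BC", "RN"))
```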

Handle illegal special characters in query parameters

From ropensci/software-review#505 (comment):

Character arguments of npi_search() allow for some special character as specified by the documentation. But when searching with other special characters, the query is still submitted.

reprex:

library("npi")
npi_search(first_name = "KOŒ*")
#> Requesting records 0-10...
#> Error in `npi_handle_response()`:
#> ! 
#> Field: first_name
#> Field contains special character(s) or wrong number of characters

Created on 2022-04-04 by the reprex package (v2.0.1)

It works, but it falls back on the API to catch the problem. I think these issues could be caught earlier, when processing the arguments, to save time and avoid additional queries.

Improve error message when internet off or endpoint unreachable

From ropensci/software-review#505 (comment):

Because this is a recurrent theme for API packages (and CRAN asks them to "fail gracefully" in these cases), I checked what happens when I turn off the internet, and I don't think the result is very explicit for the user.

> npi_search(city = "San Francisco")
Requesting records 0-10...
Error in curl::curl_fetch_memory(url, handle = handle) : 
  Could not resolve host: npiregistry.cms.hhs.gov

Maybe there could be a way to display a better error message when the internet is off or the website is unreachable, such as checking first whether the internet is off and erroring explicitly if it is.
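One possible approach, sketched below, is to check connectivity up front with curl::has_internet() and raise a clearer error; safe_npi_search() is a hypothetical wrapper, not a function exported by the package.

```r
# Hypothetical wrapper: check connectivity first, then fail with a clearer
# message before ever touching the API
safe_npi_search <- function(...) {
  if (!curl::has_internet()) {
    stop("No internet connection: cannot reach the NPPES API.", call. = FALSE)
  }
  npi::npi_search(...)
}
```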

Can't use `npi_summarize()` with country code

From ropensci/software-review#505 (comment):

I've found an issue when querying by country code, which returns a slightly differently formatted result and as such renders npi_summarize() unusable.

reprex:

library("npi")
ki = npi_search(country_code = "DE")
#> Requesting records 0-10...
npi_summarize(ki)
#> Error:
#> ! Tibble columns must have compatible sizes.
#> * Size 10: Existing data.
#> * Size 12: Column `primary_taxonomy`.
#> i Only values of size one are recycled.
#> Run `rlang::last_error()` to see where the error occurred.

Maybe npi_summarize() should take this edge case into account (even if I suppose there are not that many American health professionals outside the US).

Fix deprecated .sep argument in calls to tidyr::unnest()

npi::npi_summarize(npi::npi_search(city = "New York City"))
#> Requesting records 0-10...
#> Warning: The `.sep` argument of `unnest()` is deprecated as of tidyr 1.0.0.
#> Use `names_sep = '_'` instead.
#> This warning is displayed once every 8 hours.
#> Call `lifecycle::last_lifecycle_warnings()` to see where this warning was generated.
#> # A tibble: 10 × 6
#>           npi name      enumeration_type primary_practic… phone primary_taxonomy
#>         <int> <chr>     <chr>            <chr>            <chr> <chr>           
#>  1 1598295529 MUHAMMAD… Individual       475 SEAVIEW AVE… 718-… Student in an O…
#>  2 1710977137 JOHN SCR… Individual       115 WEST 27TH S… 212-… Social Worker C…
#>  3 1346224904 NICULAE … Organization     10 EAST 38TH ST… 212-… Internal Medici…
#>  4 1992776843 NANCY RA… Individual       312 E 94 ST, NE… 212-… Nurse Practitio…
#>  5 1770554206 LYNN KEP… Individual       205 E 64 ST SUI… 212-… Physician Assis…
#>  6 1083687214 SCOTT RI… Individual       33 WEST 42ND ST… 212-… Optometrist     
#>  7 1588637987 CHUNG SO… Individual       901 SIXTH AVENU… 212-… Optometrist     
#>  8 1851366066 BENJAMIN… Individual       150 E 37TH ST A… 212-… Psychiatry & Ne…
#>  9 1629046321 KENNETH … Individual       530 FIRST AVE S… 212-… Otolaryngology  
#> 10 1366405755 HAROLD O… Individual       1737 YORK AVE S… 212-… Dentist

Created on 2022-02-23 by the reprex package (v2.0.1)

NPPES API v2.1 compatibility: Query parameters

Update this package so the query parameters work with the latest version (2.1) of the NPPES registry API. Prior versions will be deprecated in September 2019:

use_first_name_alias (Version 2.1): This field only applies to Individual Providers when not doing a wildcard search. When set to "True", the search results will include Providers with similar First Names. E.g., first_name=Robert, will also return Providers with the first name of Rob, Bob, Robbie, Bobby, etc. Valid Values are:
True: Will include alias/similar names.
False: Will only look for exact matches.
Default Value is True

address_purpose (Version 2.0 and After): Refers to whether the address information entered pertains to the provider's Mailing Address or the provider's Practice Location Address. When not specified, the results will contain the providers where either the Mailing Address or any of the Practice Location Addresses match the entered address information. PRIMARY will only search against the Primary Location Address, while SECONDARY will only search against Secondary Location Addresses. Valid values are:
LOCATION
MAILING
PRIMARY
SECONDARY

postal_code (Version 2.1): The Postal Code associated with the provider's address identified in Address Purpose. If you enter a 5 digit postal code, it will match any appropriate 9 digit (zip+4) codes in the data. Trailing wildcard entries are permitted requiring at least two characters to be entered (e.g., "21*").
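Assuming npi_search() passes these v2.1 parameters through to the API, usage might look like the following sketch:

```r
# Exact first-name matches only (v2.1 alias matching disabled)
npi_search(first_name = "Robert", use_first_name_alias = FALSE)

# Trailing postal-code wildcard, matching practice locations only
npi_search(postal_code = "21*", address_purpose = "LOCATION")
```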
