prioritizr / wdpar

Interface to the World Database on Protected Areas

Home Page: https://prioritizr.github.io/wdpar

License: GNU General Public License v3.0

Languages: R 86.74%, TeX 10.32%, Makefile 2.94%
Topics: r, conservation, data, spatial, database, biodiversity, protected-areas, cran, r-package, rstats

wdpar's Introduction

prioritizr

Systematic Conservation Prioritization in R


The prioritizr R package uses mixed integer linear programming (MILP) techniques to provide a flexible interface for building and solving conservation planning problems. It supports a broad range of objectives, constraints, and penalties that can be used to custom-tailor conservation planning problems to the specific needs of a conservation planning exercise. Once built, conservation planning problems can be solved using a variety of commercial and open-source exact algorithm solvers. In contrast to the algorithms conventionally used to solve conservation problems, such as heuristics or simulated annealing, the exact algorithms used here are guaranteed to find optimal solutions. Furthermore, conservation problems can be constructed to optimize the spatial allocation of different management actions or zones, meaning that conservation practitioners can identify solutions that benefit multiple stakeholders. Finally, the package can read input data formatted for the Marxan conservation planning program, and it can find much cheaper solutions in a much shorter period of time than Marxan.

Installation

Official version

The latest official version of the prioritizr R package can be installed from the Comprehensive R Archive Network (CRAN) using the following R code.

install.packages("prioritizr", repos = "https://cran.rstudio.com/")

Developmental version

The latest development version can be installed to gain access to new functionality that is not yet present in the latest official version. Please note that the development version is more likely to contain coding errors than the official version. It can be installed directly from the GitHub online code repository or from the R Universe. In general, we recommend installing from the R Universe, because this does not require any additional software (e.g., RTools for Windows systems, or Xcode and gfortran for macOS systems).

  • To install the latest development version from R Universe, use the following R code.

    install.packages(
      "prioritizr",
      repos = c(
        "https://prioritizr.r-universe.dev",
        "https://cloud.r-project.org"
      )
    )
  • To install the latest development version from GitHub, use the following R code.

    if (!require(remotes)) install.packages("remotes")
    remotes::install_github("prioritizr/prioritizr")

Citation

Please cite the prioritizr R package when using it in publications. To cite the latest official version, please use:

Hanson JO, Schuster R, Morrell N, Strimas-Mackey M, Edwards BPM, Watts ME, Arcese P, Bennett J, Possingham HP (2023). prioritizr: Systematic Conservation Prioritization in R. R package version 8.0.3. Available at https://CRAN.R-project.org/package=prioritizr.

Alternatively, to cite the latest development version, please use:

Hanson JO, Schuster R, Morrell N, Strimas-Mackey M, Edwards BPM, Watts ME, Arcese P, Bennett J, Possingham HP (2024). prioritizr: Systematic Conservation Prioritization in R. R package version 8.0.3.5. Available at https://github.com/prioritizr/prioritizr.
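
If the package is installed, the citation details can also be printed from within R using the citation() function (standard base R; it works for any installed package):

citation("prioritizr")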

Additionally, we keep a record of publications that use the prioritizr R package. If you use this package in any reports or publications, please file an issue on GitHub so we can add it to the record.

Usage

Here we provide a short example showing how the prioritizr R package can be used to build and solve conservation problems. Specifically, we will use an example dataset available through the prioritizrdata R package. Additionally, we will use the terra R package to perform raster calculations. To begin with, we will load the packages.

# load packages
library(prioritizr)
library(prioritizrdata)
library(terra)

We will use the Washington dataset in this example. To import the planning unit data, we will use the get_wa_pu() function. Although the prioritizr R package can support many different types of planning unit data, here our planning units are represented as a single-layer raster (i.e., terra::rast() object). Each cell represents a different planning unit, and cell values denote land acquisition costs. Specifically, there are 10757 planning units in total (i.e., cells with non-missing values).

# import planning unit data
wa_pu <- get_wa_pu()

# preview data
print(wa_pu)
## class       : SpatRaster 
## dimensions  : 109, 147, 1  (nrow, ncol, nlyr)
## resolution  : 4000, 4000  (x, y)
## extent      : -1816382, -1228382, 247483.5, 683483.5  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs 
## source      : wa_pu.tif 
## name        :         cost 
## min value   :    0.2986647 
## max value   : 1804.1838379
# plot data
plot(wa_pu, main = "Costs", axes = FALSE)

Next, we will use the get_wa_features() function to import the conservation feature data. Although the prioritizr R package can support many different types of feature data, here our feature data are represented as a multi-layer raster (i.e., terra::rast() object). Each layer describes the spatial distribution of a feature. Here, our feature data correspond to different bird species. To account for migratory patterns, the breeding and non-breeding distributions of species are represented as different features. Specifically, the cell values denote the relative abundance of individuals, with higher values indicating greater abundance.

# import feature data
wa_features <- get_wa_features()

# preview data
print(wa_features)
## class       : SpatRaster 
## dimensions  : 109, 147, 396  (nrow, ncol, nlyr)
## resolution  : 4000, 4000  (x, y)
## extent      : -1816382, -1228382, 247483.5, 683483.5  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs 
## source      : wa_features.tif 
## names       : Recur~ding), Botau~ding), Botau~ding), Corvu~ding), Corvu~ding), Cincl~full), ... 
## min values  :       0.000,       0.000,       0.000,       0.000,       0.000,        0.00, ... 
## max values  :       0.514,       0.812,       3.129,       0.115,       0.296,        0.06, ...
# plot the first nine features
plot(wa_features[[1:9]], nr = 3, axes = FALSE)

Let’s make sure that you have a solver installed on your computer. This is important so that you can use optimization algorithms to generate spatial prioritizations. If this is your first time using the prioritizr R package, please install the HiGHS solver using the following R code. Although the HiGHS solver is relatively fast and easy to install, please note that for best performance we recommend installing the Gurobi software suite and the gurobi R package (see the Gurobi Installation Guide for details).

# if needed, install HiGHS solver
install.packages("highs", repos = "https://cran.rstudio.com/")

Now, let’s generate a spatial prioritization. To ensure feasibility, we will set a budget. Specifically, the total cost of the prioritization will represent 5% of the total land value in the study area. Given this budget, we want the prioritization to increase feature representation as much as possible, so that each feature would, ideally, have 20% of its distribution covered by the prioritization. In this scenario, we can either purchase all of the land inside a given planning unit, or none of the land inside a given planning unit. Thus we will create a new problem() that will use a minimum shortfall objective (via add_min_shortfall_objective()), with relative targets of 20% (via add_relative_targets()), binary decisions (via add_binary_decisions()), and specify that we want near-optimal solutions (i.e., within 10% of optimality) using the best solver installed on our computer (via add_default_solver()).

# calculate budget
budget <- terra::global(wa_pu, "sum", na.rm = TRUE)[[1]] * 0.05

# create problem
p1 <-
  problem(wa_pu, features = wa_features) %>%
  add_min_shortfall_objective(budget) %>%
  add_relative_targets(0.2) %>%
  add_binary_decisions() %>%
  add_default_solver(gap = 0.1, verbose = FALSE)

# print problem
print(p1)
## A conservation problem (<ConservationProblem>)
## ├•data
## │├•features:    "Recurvirostra americana (breeding)" , … (396 total)
## │└•planning units:
## │ ├•data:       <SpatRaster> (10757 total)
## │ ├•costs:      continuous values (between 0.2987 and 1804.1838)
## │ ├•extent:     -1816381.6182, 247483.5211, -1228381.6182, 683483.5211 (xmin, ymin, xmax, ymax)
## │ └•CRS:        +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs (projected)
## ├•formulation
## │├•objective:   minimum shortfall objective (`budget` = 8748.4908)
## │├•penalties:   none specified
## │├•targets:     relative targets (between 0.2 and 0.2)
## │├•constraints: none specified
## │└•decisions:   binary decision
## └•optimization
##  ├•portfolio:   shuffle portfolio (`number_solutions` = 1, …)
##  └•solver:      gurobi solver (`gap` = 0.1, `time_limit` = 2147483647, `first_feasible` = FALSE, …)
## # ℹ Use `summary(...)` to see complete formulation.

After we have built a problem(), we can solve it to obtain a solution.

# solve the problem
s1 <- solve(p1)

# extract the objective
print(attr(s1, "objective"))
## solution_1 
##    4.40521
# extract time spent solving the problem
print(attr(s1, "runtime"))
## solution_1 
##      3.394
# extract state message from the solver
print(attr(s1, "status"))
## solution_1 
##  "OPTIMAL"
# plot the solution
plot(s1, main = "Solution", axes = FALSE)

After generating a solution, it is important to evaluate it. Here, we will calculate the number of planning units selected by the solution, and the total cost of the solution. We can also check how many representation targets are met by the solution.

# calculate number of selected planning units by solution
eval_n_summary(p1, s1)
## # A tibble: 1 × 2
##   summary     n
##   <chr>   <dbl>
## 1 overall  2319
# calculate total cost of solution
eval_cost_summary(p1, s1)
## # A tibble: 1 × 2
##   summary  cost
##   <chr>   <dbl>
## 1 overall 8748.
# calculate target coverage for the solution
p1_target_coverage <- eval_target_coverage_summary(p1, s1)
print(p1_target_coverage)
## # A tibble: 396 × 9
##    feature   met   total_amount absolute_target absolute_held absolute_shortfall
##    <chr>     <lgl>        <dbl>           <dbl>         <dbl>              <dbl>
##  1 Recurvir… TRUE         100.             20.0          23.4               0   
##  2 Botaurus… TRUE          99.9            20.0          29.2               0   
##  3 Botaurus… TRUE         100.             20.0          34.0               0   
##  4 Corvus b… TRUE          99.9            20.0          20.2               0   
##  5 Corvus b… FALSE         99.9            20.0          18.7               1.29
##  6 Cinclus … TRUE         100.             20.0          20.4               0   
##  7 Spinus t… TRUE          99.9            20.0          22.4               0   
##  8 Spinus t… TRUE          99.9            20.0          23.0               0   
##  9 Falco sp… TRUE          99.9            20.0          24.5               0   
## 10 Falco sp… TRUE         100.             20.0          24.4               0   
## # ℹ 386 more rows
## # ℹ 3 more variables: relative_target <dbl>, relative_held <dbl>,
## #   relative_shortfall <dbl>
# check percentage of the features that have their target met given the solution
print(mean(p1_target_coverage$met) * 100)
## [1] 96.46465

Although this solution helps meet the representation targets, it does not account for existing protected areas inside the study area. As such, it does not account for the possibility that some features could be partially – or even fully – represented by existing protected areas and, in turn, might fail to identify meaningful priorities for new protected areas. To address this issue, we will use the get_wa_locked_in() function to import spatial data for protected areas in the study area. We will then add constraints to the problem() to ensure they are selected by the solution (via add_locked_in_constraints()).

# import locked in data
wa_locked_in <- get_wa_locked_in()

# print data
print(wa_locked_in)
## class       : SpatRaster 
## dimensions  : 109, 147, 1  (nrow, ncol, nlyr)
## resolution  : 4000, 4000  (x, y)
## extent      : -1816382, -1228382, 247483.5, 683483.5  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs 
## source      : wa_locked_in.tif 
## name        : protected areas 
## min value   :               0 
## max value   :               1
# plot data
plot(wa_locked_in, main = "Existing protected areas", axes = FALSE)

# create new problem with locked in constraints added to it
p2 <-
  p1 %>%
  add_locked_in_constraints(wa_locked_in)

# solve the problem
s2 <- solve(p2)

# plot the solution
plot(s2, main = "Solution", axes = FALSE)

This solution is an improvement over the previous solution. However, there are some places in the study area that are not available for protected area establishment (e.g., due to land tenure). As a consequence, the solution might not be practical for implementation, because it might select some places that are not available for protection. To address this issue, we will use the get_wa_locked_out() function to import spatial data describing which planning units are not available for protection. We will then add constraints to the problem() to ensure they are not selected by the solution (via add_locked_out_constraints()).

# import locked out data
wa_locked_out <- get_wa_locked_out()

# print data
print(wa_locked_out)
## class       : SpatRaster 
## dimensions  : 109, 147, 1  (nrow, ncol, nlyr)
## resolution  : 4000, 4000  (x, y)
## extent      : -1816382, -1228382, 247483.5, 683483.5  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs 
## source      : wa_locked_out.tif 
## name        : urban areas 
## min value   :           0 
## max value   :           1
# plot data
plot(wa_locked_out, main = "Areas not available for protection", axes = FALSE)

# create new problem with locked out constraints added to it
p3 <-
  p2 %>%
  add_locked_out_constraints(wa_locked_out)

# solve the problem
s3 <- solve(p3)

# plot the solution
plot(s3, main = "Solution", axes = FALSE)

This solution is even better than the previous solution. However, we are not finished yet. The planning units selected by the solution are fairly fragmented. This can cause issues because fragmentation increases management costs and reduces conservation benefits through edge effects. To address this issue, we can further modify the problem by adding penalties that punish overly fragmented solutions (via add_boundary_penalties()). Here we will use a penalty factor (i.e., boundary length modifier) of 0.003, and an edge factor of 50% so that planning units that occur on the outer edge of the study area are not overly penalized.

# create new problem with boundary penalties added to it
p4 <-
  p3 %>%
  add_boundary_penalties(penalty = 0.003, edge_factor = 0.5)

# solve the problem
s4 <- solve(p4)

# plot the solution
plot(s4, main = "Solution", axes = FALSE)

Now, let’s explore which planning units selected by the solution are most important for cost-effectively meeting the targets. To achieve this, we will calculate importance (irreplaceability) scores using the Ferrier method. Although this method produces scores for each feature separately, we will examine the total scores that summarize overall importance across all features.

# calculate importance scores
rc <-
  p4 %>%
  eval_ferrier_importance(s4)

# print scores
print(rc)
## class       : SpatRaster 
## dimensions  : 109, 147, 397  (nrow, ncol, nlyr)
## resolution  : 4000, 4000  (x, y)
## extent      : -1816382, -1228382, 247483.5, 683483.5  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=laea +lat_0=45 +lon_0=-100 +x_0=0 +y_0=0 +ellps=sphere +units=m +no_defs 
## source(s)   : memory
## varnames    : wa_pu 
##               wa_pu 
##               wa_pu 
##               ...
## names       :  Recur~ding),  Botau~ding),  Botau~ding),  Corvu~ding),  Corvu~ding),  Cincl~full), ... 
## min values  : 0.0000000000, 0.0000000000, 0.0000000000, 0.000000e+00, 0.000000e+00, 0.000000e+00, ... 
## max values  : 0.0003227724, 0.0002213034, 0.0006622152, 7.771815e-05, 8.974447e-05, 8.483296e-05, ...
# plot the total importance scores
## note that gray cells are not selected by the prioritization
plot(
  rc[["total"]], main = "Importance scores", axes = FALSE,
  breaks = c(0, 1e-10, 0.005, 0.01, 0.025),
  col = c("#e5e5e5", "#fff7ec", "#fc8d59", "#7f0000")
)

This short example demonstrates how the prioritizr R package can be used to build and customize conservation problems, and then solve them to generate solutions. Although we explored just a few different functions for modifying a conservation problem, the package provides many functions for specifying objectives, constraints, penalties, and decision variables, so that you can build and custom-tailor conservation planning problems to suit your planning scenario.
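
For instance, a different formulation can be obtained simply by swapping the objective. The following sketch (using the same data and the package's documented functions; it is not part of the original walkthrough) uses a minimum set objective, which minimizes cost subject to meeting all targets, instead of the budget-limited minimum shortfall objective used above.

# sketch: formulation that meets 10% targets at minimum cost
p5 <-
  problem(wa_pu, features = wa_features) %>%
  add_min_set_objective() %>%
  add_relative_targets(0.1) %>%
  add_binary_decisions() %>%
  add_default_solver(gap = 0.1, verbose = FALSE)

# solve and plot (requires a solver to be installed)
s5 <- solve(p5)
plot(s5, main = "Minimum set solution", axes = FALSE)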

Learning resources

The package website contains information on the prioritizr R package. Here you can find documentation for every function and built-in dataset, as well as news describing the updates in each package version. It also contains articles and tutorials.

Additional resources can also be found in online repositories under the prioritizr organization. These resources include slides for talks and seminars about the package. Additionally, workshop materials are available too (e.g., the Carleton 2023 workshop).

Getting help

If you have any questions about the prioritizr R package or suggestions for improving it, please post an issue on the code repository.

wdpar's People

Contributors

jeffreyhanson


wdpar's Issues

topology evaluation errors in wdpa_clean

In addition to #19, I came across the following error in the "DNK" (Denmark) subset:

Error in CPL_geos_binop(st_geometry(x), st_geometry(y), op, par, pattern, :
Evaluation error: TopologyException: side location conflict at 1029038.938 6096087.5300000003.
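
A workaround that sometimes resolves such TopologyException errors (a sketch only; it assumes a recent sf version where st_make_valid() is available) is to repair the raw geometries before cleaning:

# repair invalid geometries before running the cleaning routine
library(sf)
library(wdpar)
dnk_raw_pa_data <- wdpa_fetch("DNK", wait = TRUE)
dnk_raw_pa_data <- st_make_valid(dnk_raw_pa_data)
dnk_pa_data <- wdpa_clean(dnk_raw_pa_data)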

Error in data import related to folder path

Congratulations for the initiative.
I believe it will be of great help to my thesis
I would like to inform you that I am not able to reproduce the example available in the package manual. In the first command line, mlt_raw_pa_data <- wdpa_fetch("Malta", wait = TRUE), the following error appears:

"Error in wdpa_fetch("Malta", wait = TRUE):
   data not found in download_dir, and no internet connection to download it."

I have already tried inserting the current directory into download_dir in different ways, but without success.
I'm also connected to the internet.
Thank you

Undefined error in wdpa_fetch()

Hi,

Thanks for the awesome package. I was trying to use the package to download MPA data from France, but on running wdpa_fetch() I was greeted with an error. Can you help with that?

library(sf)
library(wdpar)

wdpaid <- "555526224"
mpa <- as_Spatial(wdpa_fetch("France")) %>% filter(WDPAID == wdpaid)

Error shown:
Error in checkError(res) :
Undefined error in httr call. httr output: Failed to connect to localhost port 4567: Connection refused

Can you explain how I can solve this issue?

Fails to download country-level data

The wdpa_fetch() function for downloading country data currently doesn't work due to changes on the Protected Planet website. For example, running the following code throws this error:

library(wdpar)
lie_raw_data <- wdpa_fetch("Liechtenstein", wait = TRUE, force = TRUE)
> Error:   Summary: NoSuchElement
>         Detail: An element could not be located on the page using the given search parameters.
>         class: org.openqa.selenium.NoSuchElementException
>        Further Details: run errorDetails method

Error: Summary: ElementNotVisible

A couple of people (and many thanks to them for bringing this to my attention) have encountered the following error message when running wdpa_fetch:

Error:  Summary: ElementNotVisible
        Detail: An element command could not be completed because the element is not 
                visible on the page.
        class: org.openqa.selenium.ElementNotVisibleException
        Further Details: run errorDetails method

After some detective work, it appears this error is being thrown by the RSelenium web driver when trying to click on a button (rd$clickElement() in wdpa_url.R) to navigate the http://protectedplanet.net website.

HTTP error 404 in "global" query with wdpa_fetch

Hi @jeffreyhanson! Thanks a lot for the useful package and for taking the time to address all of the open issues. I have gone through all of them and I haven't found the error I am getting, so here I go:
When trying to retrieve the whole dataset through global_data <- wdpa_fetch("global", wait = FALSE), I get the following error:
Downloaded 303 bytes...Error in curl::curl_download(download_url, file_path, quiet = !verbose) :
HTTP error 404.

I have tried it from different R versions and servers, and I always get the same error. Any clue what might be causing this?

Thanks in advance!

JOSS Review: Improve Statement of need / description of use-cases

It seems to me that the statement of need in the paper and/or the introduction part of the documentation could be a little clearer.

My understanding is this: the wdpar package can help users of the WDPA and WDOECM work more effectively with the data, in a broad sense, by allowing automated download and preprocessing according to scientific recommendations. You talk a lot about "data cleaning procedures" which are difficult to implement, but the link to the final use-cases is still a bit blurry from my perspective (you do, though, describe them a bit in the "research application" part).

For me, the question arises whether wdpar is "only" intended to serve as a planning and reporting tool (i.e. display and quantification of protected area coverage & reducing over-estimations).

-> I guess this is/was the main concern when you developed the package...

or whether wdpar could also serve other "final" purposes, such as monitoring and evaluation of protected areas' effectiveness.

-> this is probably what others have turned it into by using it for their own types of analysis

Is this interpretation correct? If so, then it could make sense to inform users about that because the wdpa_clean function with its default settings is really focussed on area estimations.

Maybe you could write something along the lines of: the primary intent of the package is to calculate protected area coverage, e.g. to report on Aichi policy progress; nevertheless, parts of the downloading and preprocessing routines can also be helpful in other contexts, such as spatial analysis of protected area effectiveness. If the final use-case is different from official PA coverage reporting, then users should be aware of the settings in the wdpa_clean function and fine-tune them to their needs. For monitoring, it might make sense, e.g., to "retain a specific PA status", to "not exclude UNESCO sites", or to "increase the geometrical precision" of the data-cleaning functions to obtain better area estimates at the local level.

Link to the review thread.

How to get and clean a global PA list?

First, thank you for making this package available.

I am trying to download and clean a global PA data set.
Following the wdpa_fetch documentation, I arrive at 22 M km2 of terrestrial PA, whereas I expected (from the protectedplanet summaries) to get 20.2 M km2. I think this is because I cleaned the data but have not yet resolved overlaps through an sf operation.

I have not been able to do this because wdpa_clean cannot allocate enough memory.

Is there a way to loop through the database continent-wise, or otherwise just get a cleaned up dataset?
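
One way to keep memory use down (a sketch; the ISO-3 codes are illustrative) is to fetch and clean the data one country at a time, and only combine the results afterwards:

library(wdpar)
# clean each country separately to limit peak memory use
iso3 <- c("DNK", "DEU", "NLD")  # illustrative subset; extend as needed
cleaned <- lapply(iso3, function(x) wdpa_clean(wdpa_fetch(x, wait = TRUE)))
combined <- do.call(rbind, cleaned)  # note: overlaps between countries remain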

JOSS Review: Add statement about state of the field

Last issue from my side. JOSS Review criteria ask if the author describes the "State of the field: Do the authors describe how this software compares to other commonly-used packages?"

I guess that the wdpar package is pretty unique in the sense that there is no other R (or Python) package AFAIK that does exactly the same or something very similar to what you do. So it could be informative to the reader to write a short sentence in your manuscript that tells them that this is probably the only piece of software to deal with this very specific issue. I always enjoy it if an R package also lists alternatives that could serve me and how they are different... so knowing that there is basically no alternative to the wdpar package at the moment could be a shortcut for some readers to not invest too much into finding other alternatives (I guess there is also no other software outside the Python/R world to do so), plus it increases the value of your package to the reader in the sense that it is unique in what it does.

Nevertheless, there is e.g. the Digital Observatory for Protected Areas (DOPA), which provides country summary stats on protected area progress towards Aichi. You could cite them because they probably have to apply exactly the same methods as you do, and they do allow you to create summary stats to a limited degree.

request for "wdpa_merge()"

Hi @jeffreyhanson, while I am about to finish cleaning all WDPA country subsets, I tried putting them all together in order to finally get a "flattened WDPA layer". However, neither st_combine/st_union nor rbind followed by a second wdpa_clean seems to work properly; I always get some overlapping areas, and thus double counting in any area calculations.
Therefore, as you know the details of your package: could you put together a function wdpa_merge that merges subsets of the WDPA and cleans the overlaps between countries? (In theory, these should be transboundary protected areas only.)
That would be great ...
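
Until such a function exists, a rough approximation (a sketch, assuming the cleaned country subsets are stored in a list named country_data) could combine the subsets and then remove cross-border overlaps with the package's exported st_erase_overlaps() helper:

# combine cleaned country subsets and erase transboundary overlaps
combined <- do.call(rbind, country_data)
flattened <- st_erase_overlaps(combined)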

verify point buffer calculations

# check that buffering a point reproduces the reported area
library(sf)
rep_area <- 5  # units in km^2
pt <- st_sfc(st_point(c(0, 0)), crs = 3395)
pl <- st_buffer(pt, sqrt((rep_area * 1000000) / pi))
area <- units::set_units(st_area(pl), km^2)
print(area)  # should be close to 5

wdpa_clean error

Thanks for this awesome package Jeff!

Downloading the global dataset works fine.

wdpa_raw <- wdpa_fetch("global", wait = TRUE, 
                       download_dir = here("WDPA/"))

When I run the following with a crs setting of proj4string(base_raster) =
"+proj=eck4 +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84 +units=m +no_defs"

wdpa <- wdpa_clean(wdpa_raw, crs = proj4string(base_raster))

I'm getting this error message:

[=>----------------------] 19779/228652 (9%) eta: 2d
Error in CPL_geos_op2(op, st_geometry(x), st_geometry(y)) : 
  Evaluation error: TopologyException: Input geom 1 is invalid: Too few points in geometry component at or near point -9268899.2860000003 6380069 at -9268899.2860000003 6380069.

I will rerun without setting crs now, but wanted to bring this to your attention.
Thanks,
Richard

buffering points not working

Hi there and congrats on this new tool!
Checking out the data resulting from applying the wdpa_clean function, I find that the areas represented as points in the WDPA database are not being converted into polygons. The example you provide in the vignette works fine, but when I try other countries it seems not to work. Here is a picture of the protected areas in Bolivia before and after applying the wdpa_clean function; PAs represented as points are in black. I also found that some of the polygons in the raw data are missing after the cleaning process (see the example shaded in the picture).

Any clue of what could be happening?
Thanks!
Nico


My code:
bol_raw_pa_data <- wdpa_fetch("Bolivia")
bol_pa_data <- wdpa_clean(bol_raw_pa_data)
bol_pa_data <- st_transform(bol_pa_data, "+proj=longlat +datum=WGS84 +no_defs") # I changed the crs for plotting purposes only
bg <- get_stamenmap(unname(st_bbox(bol_pa_data)), zoom = 4, maptype = "watercolor", force = TRUE)

rawplot <- ggmap(bg) +
  geom_sf(data = bol_raw_pa_data, fill = "#31A35480", inherit.aes = FALSE) +
  theme(axis.title = element_blank()) +
  ggtitle("RawData") +
  geom_sf(data = bol_raw_pa_data[bol_raw_pa_data$WDPAID == "98183", ], fill = "black", inherit.aes = FALSE)

cleanplot <- ggmap(bg) +
  geom_sf(data = bol_pa_data, fill = "#31A35480", inherit.aes = FALSE) +
  theme(axis.title = element_blank()) +
  ggtitle("CleanData")

multiplot(rawplot, cleanplot, cols = 2)

wdpa_fetch

Hi! I'm having problems to run this function, this is my code:

library(wdpar)
library(prepr)
library(dplyr)
library(tibble)

# define country names to download
country_codes <- c("CRI", "NIC", "HND", "SLV", "GTM", "BLZ", "MEX")

# download data for each country
mult_data <- lapply(country_codes, wdpa_fetch, wait = TRUE)

Even if I close the session and open it again, it keeps showing this error:

mult_data <- lapply(country_codes, wdpa_fetch, wait = TRUE)
[100%] Downloaded 15368060 bytes...

Error in wdman::phantomjs(verbose = FALSE) :
PhantomJS signals port = 4567 is already in use.

I don't know how to solve it, thanks for reading.

No internet connection with R Studio to download wdpa data?

Dear Jeff,
I was excited to try out your wdpar package and downloaded it yesterday. When I tried to let your example run I received the following error:

mlt_raw_pa_data <- wdpa_fetch("MLT", wait = TRUE)

Error in wdpa_fetch("MLT", wait = TRUE) :
data not found in download_dir, and no internet connection to download it.

As far as I understood your description, WDPA data should be downloaded by your package automatically, so there is no need for me to manually put something in download_dir, right? I do not understand why I get the "no internet connection" error, as I am connected to the internet and downloading packages from CRAN works just fine. I use RStudio Version 1.1.463 and R Version 3.5.2.
Maybe you have an idea why it is not working for me?

Anyway, thanks for providing this cool package!
Cheers,
Anke

upcoming sf breaks wdpar

Hi Jeffrey, this happens here:

Package: wdpar
Check: tests
New result: ERROR
    Running ‘testthat.R’ [2s/2s]
  Running the tests in ‘tests/testthat.R’ failed.
  Complete output:
    > # load packages
    > library(testthat)
    > library(wdpar)
    Loading required package: sf
    Linking to GEOS 3.10.3, GDAL 3.5.0, PROJ 9.0.1; sf_use_s2() is TRUE
    > 
    > # enable parallel testing
    > Sys.unsetenv("R_TESTS")
    > 
    > # run tests
    > test_check("wdpar")
    [ FAIL 1 | WARN 0 | SKIP 27 | PASS 19 ]
    
    ══ Skipped tests ═══════════════════════════════════════════════════════════════
    • On CRAN (27)
    
    ══ Failed tests ════════════════════════════════════════════════════════════════
    ── Failure (test_wdpa_dissolve.R:20:3): works ──────────────────────────────────
    `y` not equal to `y2`.
    Component "geometry": Component 1: Component 1: Mean relative difference: 0.9538462
    
    [ FAIL 1 | WARN 0 | SKIP 27 | PASS 19 ]
    Error: Test failures
    Execution halted

It seems the polygon is identical, but ordered differently; this may be caused by an update in GEOS. You could use st_equals() as an alternative to check for geometrical equality.
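
For reference, a sketch of the suggested geometry-based comparison (assuming y and y2 are the sf objects from the failing test):

# TRUE if each geometry in y is spatially equal to its counterpart in y2,
# regardless of vertex ordering
all(diag(sf::st_equals(y, y2, sparse = FALSE)))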

Error when retrieving global WDPA dataset

when running the latest dev version of the package, I get an error when trying to download the whole WDPA using the following code:

wdpa_global <- wdpa_fetch("global", wait = TRUE, download_dir = "~/Science/Data/GIS/WDPA - World", force_download = TRUE, verbose = TRUE)

Cannot open layer WDPA_poly_Dec2020
Error in CPL_read_ogr(dsn, layer, query, as.character(options), quiet, :
Opening layer failed.

Curiously, however, the file appears to download (in full) into the designated directory (WDPA_Dec2020_Public.gdb.zip, ~1.35 GB) - so it's not a major problem, but figured worth bringing up.

Failed to connect to localhost

Thanks for such a great contribution. This will make using the WDPA much easier. (I love wdpa_clean()!)

I am trying to replicate the process in the README, and am getting some errors:

> library(wdpar)
Loading required package: sf
Linking to GEOS 3.6.1, GDAL 2.2.3, PROJ 4.9.3
> mlt_raw_pa_data <- wdpa_fetch("Malta", wait = TRUE)
Error in checkError(res) : 
  Undefined error in httr call. httr output: Failed to connect to localhost port 4567: Connection refused

Looking at the code, it seems like the problem may be in wdpa_url()? Line 48 calls
rd <- RSelenium::remoteDriver(port = 4567L, browserName = "phantomjs"). If I execute that line and then try rd$open(silent = TRUE) I get the same error message as above.

Running pingr::is_online() returns TRUE.


This is my session info

> sessionInfo()
R version 3.5.2 (2018-12-20)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17134)

Matrix products: default

locale:
[1] LC_COLLATE=Spanish_Mexico.1252 
[2] LC_CTYPE=Spanish_Mexico.1252   
[3] LC_MONETARY=Spanish_Mexico.1252
[4] LC_NUMERIC=C                   
[5] LC_TIME=Spanish_Mexico.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets 
[6] methods   base     

other attached packages:
[1] wdpar_0.0.1 sf_0.7-2   

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.0        magrittr_1.5     
 [3] units_0.6-2       rappdirs_0.3.1   
 [5] RSelenium_1.7.5   R6_2.3.0         
 [7] httr_1.4.0        caTools_1.17.1.1 
 [9] tools_3.5.2       grid_3.5.2       
[11] packrat_0.4.9-2   binman_0.1.1     
[13] e1071_1.7-0       DBI_1.0.0        
[15] semver_0.2.0      subprocess_0.8.3 
[17] class_7.3-14      openssl_1.1      
[19] yaml_2.2.0        assertthat_0.2.0 
[21] countrycode_1.1.0 bitops_1.0-6     
[23] curl_3.3          wdman_0.2.4      
[25] compiler_3.5.2    pingr_1.1.2      
[27] classInt_0.3-1    XML_3.98-1.16    
[29] jsonlite_1.6     

Port error

Hi,

While running a loop to download PAs for multiple countries, the code fails with a port-in-use error:

Reprex:
library(wdpar)

# make a list of countries
countries <- c("Vietnam", "Malaysia", "Laos")

# loop over countries to get all shapefile data
for (i in countries) {
  mlt_raw_pa_data <- wdpa_fetch(
    i, wait = TRUE, download_dir = getwd()
  )
}

After the first file downloads, the function stops with:
Error in wdman::phantomjs(verbose = FALSE) :
PhantomJS signals port = 4567 is already in use.

I believe this is an issue with Phantom JS or Node JS, I've seen similar instances of port issues like this with rselenium.

sessionInfo()
R version 4.0.3 (2020-10-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19043)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] mapview_2.10.4 rgdal_1.5-21 forcats_0.5.1 stringr_1.4.0 dplyr_1.0.7
[6] purrr_0.3.4 readr_2.1.0 tidyr_1.1.4 tibble_3.1.6 ggplot2_3.3.5.9000
[11] tidyverse_1.3.1 sp_1.4-6 wdpar_1.3.2 sf_1.0-5

loaded via a namespace (and not attached):
[1] bitops_1.0-7 fs_1.5.0 satellite_1.0.4 lubridate_1.8.0 webshot_0.5.2
[6] httr_1.4.2 tools_4.0.3 backports_1.3.0 utf8_1.2.2 R6_2.5.1
[11] KernSmooth_2.23-20 DBI_1.1.1 colorspace_2.0-2 raster_3.4-8 withr_2.4.2
[16] tidyselect_1.1.1 processx_3.5.2 leaflet_2.0.4.1 curl_4.3.2 compiler_4.0.3
[21] leafem_0.1.6 cli_3.1.0 rvest_1.0.2 xml2_1.3.2 caTools_1.18.2
[26] scales_1.1.1 classInt_0.4-3 askpass_1.1 proxy_0.4-26 rappdirs_0.3.3
[31] digest_0.6.28 wdman_0.2.5 base64enc_0.1-3 pkgconfig_2.0.3 htmltools_0.5.2
[36] dbplyr_2.1.1 fastmap_1.1.0 htmlwidgets_1.5.4 rlang_0.4.12 readxl_1.3.1
[41] rstudioapi_0.13 generics_0.1.1 jsonlite_1.7.2 crosstalk_1.2.0 magrittr_2.0.1
[46] Rcpp_1.0.7.2 munsell_0.5.0 fansi_0.5.0 lifecycle_1.0.1 terra_1.4-22
[51] stringi_1.7.5 yaml_2.2.1 grid_4.0.3 crayon_1.4.2 semver_0.2.0
[56] lattice_0.20-41 haven_2.4.3 hms_1.1.1 ps_1.6.0 pillar_1.6.4
[61] codetools_0.2-18 stats4_4.0.3 reprex_2.0.1 XML_3.99-0.8 glue_1.5.0
[66] packrat_0.7.0 modelr_0.1.8 png_0.1-7 vctrs_0.3.8 tzdb_0.2.0
[71] cellranger_1.1.0 gtable_0.3.0 openssl_1.4.5 assertthat_0.2.1 binman_0.1.2
[76] broom_0.7.10 countrycode_1.3.0 e1071_1.7-9 class_7.3-17 RSelenium_1.7.7
[81] units_0.7-2 ellipsis_0.3.2

JOSS Review: Improve documentation on geo-processing steps and its effects on the original geometries

Hi @jeffreyhanson .

I think it could be helpful for users to extend the documentation a little on the internal geo-processing steps and what they actually do to the data (geometries). This should be part of the vignette, but also of the paper (which so far focuses very much on describing the need, but not the internals of the package).

From my first test, there are a few things that should be discussed:

  • Overview of the applied geo-processing steps in wdpa_clean. -> maybe this could also be a graphical representation.
  • which geo-processing steps from wdpa_clean affect the original geometries?
  • How do they affect the geometries and what are consequences for the use (e.g. question of scale)?
  • How do default settings affect geometries (discuss that default is country scale analysis)?
  • How can we fine-tune settings e.g. to work on different scales and what are the effects on the geometries?
  • How are overlapping polygon boundaries resolved and what are the consequences for analysis -> e.g. if there are two overlapping areas with different IUCN categories, how is the boundary resolved and what are the consequences for calculating area statistics.

If possible, I would recommend that you use interactive maps in your vignette instead of the map that you plotted (which does not really allow one to see anything). Below you can find some sample code and some screenshots that might allow users to understand more intuitively what is done with the data. You could use and maybe extend this a little bit.

# ----- reproduce quickstart tutorial from https://github.com/prioritizr/wdpar -----
# load packages
library(wdpar)
library(dplyr)
library(ggmap)

# download protected area data for Malta
mlt_raw_pa_data <- wdpa_fetch("Malta",
                              wait = TRUE,
                              download_dir = rappdirs::user_data_dir("wdpar"))

# clean Malta data
mlt_pa_data <- wdpa_clean(mlt_raw_pa_data)

# reproject data to longitude/latitude for plotting
mlt_pa_data <- st_transform(mlt_pa_data, 4326)

# download basemap imagery
bg <- get_stamenmap(unname(st_bbox(mlt_pa_data)),
                    zoom = 8,
                    maptype = "watercolor",
                    force = TRUE)

# make map
ggmap(bg) +
  geom_sf(aes(fill = IUCN_CAT), data = mlt_pa_data, inherit.aes = FALSE) +
  theme(axis.title = element_blank(), legend.position = "bottom")

# ----- interactive map to compare raw to processed data -----
library(mapview)

mapView(mlt_pa_data) + mapView(mlt_raw_pa_data, col.regions = "red")

# ----- compare higher precision processing -----
# problem: Data with the default setting gets quite distorted on local level
# (see mapview before)

# clean Malta data with higher geometry precision
mlt_pa_data_highprecision <- wdpa_clean(mlt_raw_pa_data,
                                        geometry_precision = 10000)


mapView(mlt_pa_data) +
  mapView(mlt_raw_pa_data, col.regions = "red") +
  mapView(mlt_pa_data_highprecision, col.regions = "green")


# clean Malta data with higher geometry precision and keep overlaps
mlt_pa_data_highprecision_overlap <- wdpa_clean(mlt_raw_pa_data,
                                                geometry_precision = 10000,
                                                erase_overlaps = FALSE)


mapView(mlt_pa_data) +
  mapView(mlt_raw_pa_data, col.regions = "red") +
  mapView(mlt_pa_data_highprecision, col.regions = "green")+
  mapView(mlt_pa_data_highprecision_overlap, col.regions = "purple")

Original Data
[image]

Default Settings
[image]

Higher Geometry Precision
[image]

Higher Geometry Precision and Overlaps
[image]

Link to review

Dissolve all MPAs in each country

Thanks a lot for this package; unfortunately, I haven't been able to use it effectively. I'm trying to perform a global analysis and I would need to dissolve all PAs and MPAs for each country. This would mean one polygon for each country, so I can perform spatial analysis by country on a global scale.

Unfortunately, when I work with the global data set and follow the provided instructions, I get errors about vertices and R freezes. Is there a dissolved global shapefile by country, or a way to produce it with this package?
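
A per-country sketch (assuming each country subset fits in memory; wdpa_dissolve() is the dissolve helper referenced in the package's test suite, so check its documentation for the exact signature):

# dissolve each country's cleaned protected areas into a single geometry
country_codes <- c("VNM", "KHM", "LAO")  # illustrative ISO-3 codes
dissolved <- lapply(country_codes, function(x) {
  wdpa_dissolve(wdpa_clean(wdpa_fetch(x, wait = TRUE)))
})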

Request to download individual PAs

Hi. Thanks for this helpful package. I'd like to know if there is the possibility to download individual protected areas given the WDPA ID, or if this query could be implemented. We work with a list of PAs from different countries, and downloading all the data is lengthy and sometimes unnecessary, especially in automated tasks. Would be nice to hear back from you.
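
In the meantime, one workaround (a sketch; the country and WDPA ID are illustrative) is to fetch the country containing the protected area and subset it by the WDPAID attribute:

library(wdpar)
# fetch the country data, then keep only the protected area of interest
fra_data <- wdpa_fetch("FRA", wait = TRUE)
pa <- fra_data[fra_data$WDPAID == 555526224, ]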

st_make_valid no longer exported by lwgeom, but by sf

Please update your package and resubmit to CRAN, to avoid CRAN errors on your package.

New result: ERROR
  Running examples in ‘wdpar-Ex.R’ failed
  The error most likely occurred in:
  
  > base::assign(".ptime", proc.time(), pos = "CheckExEnv")
  > ### Name: st_erase_overlaps
  > ### Title: Erase overlaps
  > ### Aliases: st_erase_overlaps
  > 
  > ### ** Examples
  > 
  > # create data
  > pl1 <- sf::st_polygon(list(matrix(c(0, 0, 2, 0, 1, 1, 0, 0), byrow = TRUE,
  +                                   ncol = 2))) * 100
  > pl2 <- sf::st_polygon(list(matrix(c(0, 0.5, 2, 0.5, 1, 1.5, 0, 0.5),
  +                                   byrow = TRUE, ncol = 2))) * 100
  > pl3 <- sf::st_polygon(list(matrix(c(0, 1.25, 2, 1.25, 1, 2.5, 0, 1.25),
  +                                   byrow = TRUE, ncol = 2))) * 100
  > x <- sf::st_sf(order = c("A", "B", "C"),
  +                geometry = sf::st_sfc(list(pl1, pl2, pl3), crs = 3395))
  > 
  > # erase overlaps
  > y <- st_erase_overlaps(x)
  Error: 'st_make_valid' is not an exported object from 'namespace:lwgeom'
  Execution halted

Error in downloading global wdpa data

Hi, I'm not sure if this error should be reported here or to the folks at WDPA(?), but when I've tried downloading the global data using wdpa_fetch('global'), I get the following error:
Error in utils::unzip(x, exdir = tdir) : cannot open file '/var/folders/j0/s0q31lh965q7dmf_cpt7bg040000gn/T//RtmpeGEG0F/file2f86347d96e/WDPA_Sep2019_Public/Recursos_en_Espanol/Apéndice 5_Metadatos.pdf': Illegal byte sequence
I can download the global file from the protectedplanet website, so accessing the file isn't a big problem, I just wanted to flag this error.
Thanks,
Jocelyne

wdpa_clean doesn't work offline

Hi, another issue I came across while working on the train: wdpa_clean doesn't work offline:

Error: curl::has_internet() is not TRUE

I guess this is due to the wdpa_url and wdpa_fetch functions that need an internet connection, but wdpa_clean shouldn't need one, right?

Coastlines

The vignette + wdpa_clean documentation need to mention that the data should be clipped to coastlines and this is not done as part of the data cleaning.

wdpa_clean error

Hi, I've just tried using wdpa_clean, it managed to remove areas that are not implemented, remove UNESCO reserves and remove points with no reported area, but then throws this error:
Error: package ‘lwgeom’ does not have a namespace

Cleaning Brazil areas issue

I'm trying to download and clean the wdpa data for Brazil, and have run into an error.

I start by downloading the data, which works, but I get an error just about halfway through cleaning it.

brazil = wdpa_fetch("BRA") %>%
  wdpa_clean()

Here's the result; note the error at the bottom:

removing areas that are not implemented: v
removing UNESCO reserves: v
removing points with no reported area: v
repairing geometry: v
wrapping dateline: v
repairing geometry: v
projecting areas: v
repairing geometry: v
buffering by zero: v
buffering points: v
repairing geometry: v
snapping geometry to grid: v
repairing geometry: v
formatting attribute data: v
erasing overlaps: ~
[============>--------------] 1012/2062 (49%) eta: 2m
Error in CPL_geos_union(st_geometry(x), by_feature) : 
  Evaluation error: TopologyException: Input geom 0 is invalid: Self-intersection at or near point -4572218.671977994 -3014698.965008833 at -4572218.671977994 -3014698.965008833.

I get a similar issue with Mexico, as well.

Any help with this would be greatly appreciated!

Thanks!

JOSS Review: Add small POC about Performance claims

The JOSS review criteria from the thread ask reviewers to assess whether there are any performance claims of the package that can be verified. In your quick-start vignette, you state that:

"The wdpar R package can be used to clean large datasets assuming that sufficient computational resources and time are available. Indeed, it can clean data spanning large countries, multiple countries, and even the full global datatset (sic). When processing the full global dataset, it is recommended to use a computer system with at least 32 GB RAM available and to allow for at least one full day for the data cleaning procedures to complete"

Is it possible for you to provide a very short example that supports that claim? I thought maybe you could randomly sample e.g. 500 areas from the global dataset, process them while recording the start and stop times, and report back. This would help to extrapolate and support the claim. I can also try to create a small example, but I have never worked on the global data, so I guess it might be easier for you to do so.

I would not care too much about the efficiency of a geospatial software package if it was designed for local-scale analysis, but the beauty of the wdpar package is, at least in theory, that one could create summary stats on the whole global progress towards achieving the Aichi targets (or the post-Aichi targets, e.g. if the 30 by 30 goal is approved by the global conservation community; btw. something you could also mention in your use-case description). So in that specific case, I would really like to see whether one could use the package for that.
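
A minimal timing sketch along these lines (assuming the global dataset has already been fetched into wdpa_global) could be:

# time the cleaning of a random sample of 500 areas from the global dataset
set.seed(500)
idx <- sample(seq_len(nrow(wdpa_global)), 500)
timing <- system.time(
  sample_clean <- wdpa_clean(wdpa_global[idx, ])
)
print(timing["elapsed"])  # seconds for 500 areas; extrapolate with care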

Thanks for the package!

Hi, thank you very much for the package, very exciting!
I am indeed trying to dissolve the whole WDPA into a "flat layer of global protected areas" (terrestrial and coastal only, but that's not a problem).

I didn't get to install wdpar yet due to some dependency issues. I will try again on Monday. Until then: is it possible with your package to dissolve (i.e., get rid of overlapping areas in) the whole database? How do you deal with processing performance capacities?
Would be great to have some discussion about this. I am also happy to provide my code, in case it helps developing the package.

Best regards,
Jonas

Poor internet connection breaks wdpa_fetch

I am using version 1.3.1.3.

When I called aus_mpa <- wdpa_fetch("Australia", wait = TRUE), I got the following error:

Error: 	 Summary: NoSuchElement
 	 Detail: An element could not be located on the page using the given search parameters.
 	 class: org.openqa.selenium.NoSuchElementException
	 Further Details: run errorDetails method

I am aware of #35, so I made sure to use a recent GitHub version of wdpar. The issue went away when I switched to a faster, non-mobile connection.

Reading around, it seems like PhantomJS doesn't have a good way to know if the page has loaded yet. Would it make sense to parameterize the sleep timer in wdpa_url instead?

wdpa_fetch() no longer works

The PhantomJS web driver is no longer supported by the RSelenium package, and so this function no longer works. I need to find another package to replace the RSelenium dependency.

class error in wdpa_clean

After trying around with the sp package for quite a bit, I will now try to flatten the WDPA with wdpar. The first error I came across, however, is as follows:

Error in st_cast_sfc_default(x) : list item(s) not of class sfg

It came in the WDPA subset for "SOM" (Somalia). Would be great if you can have a look at it.
Thanks and best regards,
Jonas

The package does not run with previously downloaded data, and does not start a new download

I've been trying to run the tutorial example, but an error happens at the very first step, downloading data for mapping the protected areas.

mlt_raw_pa_data <- wdpa_fetch("Malta", wait = TRUE, download_dir = rappdirs::user_data_dir("wdpar"))
Error in checkError(res) : Undefined error in httr call. httr output: Failed to connect to localhost port 4567: Connection refused

lie_raw_data <- wdpa_fetch("Liechtenstein", wait = TRUE)
Error in checkError(res) : Undefined error in httr call. httr output: Failed to connect to localhost port 4567: Connection refused

And even if I try to run the next steps with data downloaded directly from the Protected Planet website, it does not work.

shp_data <- st_transform(shp, 4326)
Error in UseMethod("st_transform") : no applicable method for 'st_transform' applied to an object of class "c('SpatialPolygonsDataFrame', 'SpatialPolygons', 'Spatial', 'SpatialVector')"

wdpa_clean.R

Hello,

I am working with wdpa_clean.R, regarding the following step:

## return empty dataset if no valid non-empty geometries remain

I'm wondering if line 311:

if (all(sf::st_is_empty(x))) {

should be:

if (any(sf::st_is_empty(x))) {

so that the empty dataset is returned if any geometry is empty. As it is now, it seems that all geometries need to be empty. Am I missing something?

Cheers.

Error: Summary: NoSuchElement

Hi,
The package looks very interesting, and I tried to use it today for the first time. Unfortunately, I ran into a problem when I tried to fetch data.

After getting the error message (see below), I installed the most recent version of wdpar from GitHub, and also updated all other packages, but I still get the same error.

I'm running RStudio on MacOS 10.14.4.

It would be great if I could use the package on my computer.

Thanks a lot,
Urs

# load packages

library(wdpar)
library(dplyr)

# download protected area data for Malta
mlt_raw_pa_data <- wdpa_fetch("Malta", wait = TRUE)
#> Error:    Summary: NoSuchElement
#>       Detail: An element could not be located on the page using the given search parameters.
#>       class: org.openqa.selenium.NoSuchElementException
#>   Further Details: run errorDetails method

Created on 2019-04-08 by the reprex package (v0.2.1)

wdpa_fetch error

Hi @jeffreyhanson

I'm just starting to play with this package and I'm getting this error

Error: x must be a vector, not a sfc_MULTIPOINT/sfc object

Not really sure what I'm doing wrong; I just copied the example: mlt_raw_pa_data <- wdpa_fetch("Malta", wait = TRUE)

Do you have any hint of what this is about?

Thanks in advance!
Isaac

GEOS version sensitivity

00check.log
testthat.Rout.fail.log

The attached logs for GEOS 3.11.0 show a failure, probably because of fixes in GEOS's NG topology engine re-ordering the coordinates of the polygon returned by sf::st_union(x):

> st_coordinates(st_geometry(y))
        X   Y L1 L2
 [1,]  50  50  1  1
 [2,]   0  50  1  1
 [3,]  75 125  1  1
 [4,]   0 125  1  1
 [5,] 100 250  1  1
 [6,] 200 125  1  1
 [7,] 125 125  1  1
 [8,] 200  50  1  1
 [9,] 150  50  1  1
[10,] 200   0  1  1
[11,]   0   0  1  1
[12,]  50  50  1  1
> st_coordinates(st_geometry(y2))
        X   Y L1 L2
 [1,] 200   0  1  1
 [2,]   0   0  1  1
 [3,]  50  50  1  1
 [4,]   0  50  1  1
 [5,]  75 125  1  1
 [6,]   0 125  1  1
 [7,] 100 250  1  1
 [8,] 200 125  1  1
 [9,] 125 125  1  1
[10,] 200  50  1  1
[11,] 150  50  1  1
[12,] 200   0  1  1

Please adapt the equality test; perhaps drop it or compare areas instead.

Feature Request: Add functionality to keep UNESCO sites and not yet implemented areas

Hi @jeffreyhanson. I understand the logic for excluding the UNESCO sites and not-yet-implemented areas from a country reporting perspective. However, for the assessment of protected areas for e.g. planning purposes, it would still be nice to keep them. Could you provide a TRUE/FALSE parameter to keep these areas if the user wishes (similar to what is already possible for keeping overlapping polygons)?
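
A sketch of what the requested interface might look like (the exclude_unesco and retain_status parameter names are hypothetical; check the current wdpa_clean() documentation to see whether equivalent arguments have since been added):

# hypothetical interface: keep UNESCO reserves and not-yet-implemented areas
mlt_pa_data <- wdpa_clean(
  mlt_raw_pa_data,
  exclude_unesco = FALSE,  # hypothetical parameter
  retain_status = NULL     # hypothetical parameter: keep all statuses
)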
