First, pick an outlet location and download some data.
```r
# Uncomment to install!
# install.packages("nhdplusTools")
library(nhdplusTools)
library(sf)

start_point <- st_sfc(st_point(c(-89.362239, 43.090266)), crs = 4269)
start_comid <- discover_nhdplus_id(start_point)

flowline <- navigate_nldi(list(featureSource = "comid", featureID = start_comid),
                          mode = "upstreamTributaries",
                          data_source = "")

subset_gpkg <- subset_nhdplus(comids = flowline$nhdplus_comid,
                              output_file = tempfile(fileext = ".gpkg"),
                              nhdplus_data = "download")

flowline <- sf::read_sf(subset_gpkg, "NHDFlowline_Network")
catchment <- sf::read_sf(subset_gpkg, "CatchmentSP")
waterbody <- sf::read_sf(subset_gpkg, "NHDWaterbody")

plot(sf::st_geometry(flowline), col = "blue")
plot(start_point, cex = 1.5, lwd = 2, col = "red", add = TRUE)
plot(sf::st_geometry(catchment), add = TRUE)
plot(sf::st_geometry(waterbody), col = rgb(0, 0, 1, alpha = 0.5), add = TRUE)
```
Read on to see how nhdplusTools will help you index data to the network you just retrieved and refactor (split, collapse, and aggregate) the catchments into a different set of catchments. Please consider registering issues and feature suggestions on GitHub.
The nhdplusTools package is intended to provide a reusable set of tools to subset, relate data to, and refactor (collapse, split, and aggregate) NHDPlus data. It implements a data model consistent with both the NHDPlus and HY_Features. The package aims to provide a set of tools with minimal dependencies that can be used to build workflows using NHDPlus data.
The package has three types of functionality:

1. Data discovery and subsetting
2. Indexing, or relating data to the network
3. Refactoring (splitting, collapsing, and aggregating catchments)

This introduction gives an overview of the basic package setup and a brief demonstration of the three types of functionality. Detailed documentation of all the package functions can be found on the Reference page.
The easiest way to install
nhdplusTools is with the
devtools package like this:
```r
# install.packages("devtools")
# devtools::install_github("usgs-r/nhdplusTools")
```
Then you can load up nhdplusTools:
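```r
library(nhdplusTools)
```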
The first thing you are going to need to do is go get some data to work with.
nhdplusTools provides the ability to download small subsets of the NHDPlus as described in the Discovery and Subsetting section. For large subsets, greater than a few thousand square kilometers, you can download the National Seamless database at this web page. You will need 7z or the
archive package to extract it.
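For example, extraction with the archive package looks like the following; the file name shown is illustrative, so adjust it to match the archive you actually downloaded.

```r
# install.packages("archive")
library(archive)

# Extract the downloaded 7z archive into temp_dir.
# The file name here is an example -- use the name of your download.
archive_extract(file.path(temp_dir, "NHDPlusV21_National_Seamless.7z"),
                dir = temp_dir)
```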
If you are working with the whole National Seamless database, nhdplusTools has some convenience functions you should be aware of. Once you have it downloaded and extracted, you can tell the nhdplusTools package where it is with the nhdplus_path() function:
```r
# temp_dir is a scratch working directory, e.g.
temp_dir <- tempdir()

nhdplus_path(file.path(temp_dir, "natseamless.gpkg"))

nhdplus_path()
```
If you are going to be loading and reloading the flowlines, flowline attributes, or catchments repeatedly, the stage_national_data() function will speed things up a bit. It creates three staged files, at the path you tell it, that are quicker for R to read. If you call it and its output files already exist, it won't overwrite them and will just return the paths to your staged files.
```r
staged_data <- stage_national_data(output_path = tempdir())

str(staged_data)
```
As you can see, stage_national_data() assumes you want to stage data in the same folder as the nhdplus_path database and returns a list of .rds files that can be read with readRDS. The flowlines and catchments are sf data.frames, and attributes is a plain data.frame with the attributes from the flowlines. Note that this introduction uses a small subset of the national seamless database, as shown in the plot.
```r
flowline <- readRDS(staged_data$flowline)
names(flowline)[1:10]

library(sf)
plot(sf::st_geometry(flowline))
```
(6/16/2019) NHDPlus HiRes is an in-development dataset that introduces much more dense flowlines and catchments. In the long run, nhdplusTools will have complete support for NHDPlus HiRes. So far, nhdplusTools will help download NHDPlus HiRes data and interface it with existing nhdplusTools functionality. It's important to note that nhdplusTools was primarily implemented using NHDPlusV2, and any use of HiRes (which is still “beta data” as of this writing) should be subject to significant scrutiny. Nevertheless, here's a short summary of how to work with NHDPlus HiRes.
For the demo below, a small sample of HiRes data that has been loaded into nhdplusTools is used. The first line shows how you can download additional data (just change download_files to TRUE).
```r
download_nhdplushr(nhd_dir = "download_dir",
                   hu_list = c("0101"), # can mix hu02 and hu04 codes.
                   download_files = FALSE) # TRUE will download files.

# nhd_dir is the directory containing the (sample) HiRes data.
hr_data <- get_nhdplushr(nhd_dir,
                         out_gpkg = file.path(nhd_dir, "nhd_hr.gpkg"))
(layers <- st_layers(hr_data))

unlink(hr_data)

# layers = NULL requests all available layers.
hr_data <- get_nhdplushr(nhd_dir,
                         out_gpkg = file.path(nhd_dir, "nhd_hr.gpkg"),
                         layers = NULL)
(layers <- st_layers(hr_data))
```
Other functionality in the package, such as the get_UT/UM/DM/DD functions, subsetting, and indexing, also works now or will soon! Stay tuned for a dedicated NHDPlus HiRes vignette, and submit issues as you find them!
One of the primary workflows nhdplusTools is designed to accomplish can be described in three steps:

1. Discover the NHDPlus catchment that a point of interest falls in.
2. Navigate the network from that starting catchment.
3. Subset the NHDPlus data found along the navigation.
Say we want to get a subset of the NHDPlus upstream of a given location. We can start with discover_nhdplus_id(). First, let's look at a given point location and see where it is relative to our flowlines.
```r
lon <- -89.362239
lat <- 43.090266

start_point <- sf::st_sfc(sf::st_point(c(lon, lat)), crs = 4269)

plot(sf::st_geometry(flowline))
plot(start_point, cex = 1.5, lwd = 2, col = "red", add = TRUE)
```
OK, so we have a point location near a river and we want to figure out what catchment is at its outlet. We can use the
discover_nhdplus_id() function which calls out to a web service and returns an NHDPlus catchment identifier, typically called a COMID.
```r
start_comid <- discover_nhdplus_id(start_point)

start_comid
```
If you have the whole National Seamless database and want to work at regional to national scales, skip down to the Local Data Subsetting section.
nhdplusTools supports discovery and data subsetting using web services made available through the Network Linked Data Index (NLDI) and the National Water Census Geoserver. The code below shows how to use the NLDI functions to build a dataset upstream of our
start_comid that we found above.
The NLDI can be queried with any set of watershed outlet locations that it has in its index. We call these “featureSources”. We can query the NLDI for an identifier of a given feature from any of its “featureSources” and find out what our navigation options are as shown below.
```r
discover_nldi_sources()$source

nldi_feature <- list(featureSource = "comid",
                     featureID = start_comid)

discover_nldi_navigation(nldi_feature)
```
The discover_nldi_navigation() function is a handy way to make sure the featureID is available for the chosen “featureSource” as well as to find valid navigation modes to be used with navigate_nldi(). Now that we know the NLDI has our comid, we can use the “upstreamTributaries” navigation option to get all the flowlines upstream, or all the features from any of the “featureSources”, as shown below.
```r
flowline_nldi <- navigate_nldi(nldi_feature,
                               mode = "upstreamTributaries",
                               data_source = "")

plot(sf::st_geometry(flowline), lwd = 3, col = "black")
plot(sf::st_geometry(flowline_nldi), lwd = 1, col = "red", add = TRUE)
```
What is not shown here is that the NLDI only provided geometry and a comid for each of the flowlines. The
subset_nhdplus function has a “download” option that allows us to download four layers and all attributes as shown below.
```r
output_file_download <- file.path(temp_dir, "subset_download.gpkg")

output_file_download <- subset_nhdplus(comids = flowline_nldi$nhdplus_comid,
                                       output_file = output_file_download,
                                       nhdplus_data = "download")

sf::st_layers(output_file_download)

flowline_download <- sf::read_sf(output_file_download,
                                 "NHDFlowline_Network")

plot(sf::st_geometry(dplyr::filter(flowline_download, streamorde > 2)),
     lwd = 7, col = "darkgrey")
plot(sf::st_geometry(flowline_nldi), lwd = 3, col = "red", add = TRUE)
```
This plot illustrates the kind of thing that's possible (filtering to specific stream orders) using the attributes that are downloaded.

Notice that the data downloaded above only has four layers, where the subset we build below has more. This functionality should be considered beta in nature, but it may be useful for some applications, so it has been included.
Before moving on, here's one more demonstration of what can be done using the NLDI. Say we know the USGS gage ID that we want NHDPlus data upstream of. We can use the NLDI to navigate from the gage the same way we did from our comid. We can also get back all the NWIS sites the NLDI knows about upstream of the one we chose!
```r
nldi_feature <- list(featureSource = "nwissite",
                     featureID = "USGS-05428500")

flowline_nldi <- navigate_nldi(nldi_feature,
                               mode = "upstreamTributaries",
                               data_source = "")

output_file_nwis <- file.path(temp_dir, "subset_download_nwis.gpkg")

output_file_nwis <- subset_nhdplus(comids = flowline_nldi$nhdplus_comid,
                                   output_file = output_file_nwis,
                                   nhdplus_data = "download")

sf::st_layers(output_file_nwis)

flowline_nwis <- sf::read_sf(output_file_nwis, "NHDFlowline_Network")

upstream_nwis <- navigate_nldi(nldi_feature,
                               mode = "upstreamTributaries",
                               data_source = "nwissite")

plot(sf::st_geometry(flowline_nwis), lwd = 3, col = "blue")
plot(sf::st_geometry(upstream_nwis), cex = 1, lwd = 2, col = "red", add = TRUE)
```
With the starting COMID we found with discover_nhdplus_id above, we can use one of the network navigation functions, get_UM, get_UT, get_DM, or get_DD, to retrieve a collection of COMIDs along the upstream mainstem, upstream with tributaries, downstream mainstem, or downstream with diversions network paths. Here we'll use upstream with tributaries, get_UT.
```r
UT_comids <- get_UT(flowline, start_comid)

UT_comids
```
If you are familiar with the NHDPlus, you will recognize that now that we have this list of COMIDs, we could go off and do all sorts of things with the various flowline attributes. For now, let's just use the COMID list to filter our
data.frame and plot it with our other layers.
```r
plot(sf::st_geometry(flowline))
plot(start_point, cex = 1.5, lwd = 2, col = "red", add = TRUE)
plot(sf::st_geometry(dplyr::filter(flowline, COMID %in% UT_comids)),
     add = TRUE, col = "red", lwd = 2)
```
Say you want to save the network subset for later use in R or in some other GIS. The subset_nhdplus() function is your friend. If you have the whole national seamless database downloaded, you can pull out large subsets of it as shown below. If you don't have the whole national seamless database, look at the second example in this section.
```r
output_file <- file.path(temp_dir, "subset.gpkg")

output_file <- subset_nhdplus(comids = UT_comids,
                              output_file = output_file,
                              nhdplus_data = nhdplus_path())

sf::st_layers(output_file)
```
Now we have an output geopackage that can be used later. It contains the network subset of catchments and flowlines, as well as a spatial subset of other layers, as shown in the status output above. To complete the demonstration, here are a couple more layers plotted up.
```r
catchment <- sf::read_sf(output_file, "CatchmentSP")
waterbody <- sf::read_sf(output_file, "NHDWaterbody")

plot(sf::st_geometry(flowline))
plot(start_point, cex = 1.5, lwd = 2, col = "red", add = TRUE)
plot(sf::st_geometry(dplyr::filter(flowline, COMID %in% UT_comids)),
     add = TRUE, col = "red", lwd = 2)
plot(sf::st_geometry(catchment), add = TRUE)
plot(sf::st_geometry(waterbody), col = rgb(0, 0, 1, alpha = 0.5), add = TRUE)
```
Expect more in this space as nhdplusTools progresses. Right now, one indexing method has been implemented. Using the data above, we can use the get_flowline_index() function to get the COMID, reachcode, and measure of our starting point like this.
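A minimal call uses the start_point and flowline objects created above:

```r
# Index our starting point to the nearest flowline in the subset.
get_flowline_index(flowline, start_point)
```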
get_flowline_index() will work with a list of points too. For demonstration purposes, we can use the gages in our subset from above.
```r
gage <- sf::read_sf(output_file, "Gage")

get_flowline_index(flowline, sf::st_geometry(gage), precision = 10)
```
For more information about get_flowline_index(), see the article about it, vignette("point_indexing"), or the reference page that describes it.
The nhdplusTools package has been developed in support of an experimental NHDPlus refactoring workflow to normalize the size of catchments and resolve particular network locations. If this work is of interest, it can be found in the network_refactor branch of this repository.