
unidata / tds


THREDDS Data Server

Home Page: https://docs.unidata.ucar.edu/tds/5.0/userguide/index.html

License: BSD 3-Clause "New" or "Revised" License

Java 95.94% HTML 1.54% Makefile 0.01% Shell 0.09% C++ 0.03% Jupyter Notebook 0.23% XSLT 0.30% JavaScript 0.20% CSS 0.48% AGS Script 0.92% PowerShell 0.02% Yacc 0.20% Awk 0.04%
thredds geoscience geodata tds geospatial-data java thredds-catalogs hacktoberfest

tds's Introduction

TDS icon

THREDDS Data Server (TDS)

The THREDDS Data Server (TDS) provides metadata and data access to scientific datasets. Datasets can be served through OPeNDAP, OGC's WMS and WCS, HTTP, and other remote data access protocols. It can be configured to aggregate a collection of datasets so the collection is seen as a single dataset when viewed through the various data access protocols. The TDS is a server-based system that can be easily installed in any servlet container such as Apache Tomcat.

For more information about the TDS, see the TDS home page: https://docs.unidata.ucar.edu/tds/5.0/userguide/index.html

You can obtain a copy of the latest released version of the TDS from the Unidata downloads page.

A mailing list, thredds@unidata.ucar.edu, exists for discussion of the TDS and THREDDS catalogs, including announcements about TDS bugs, fixes, enhancements, and releases. To subscribe, send a blank email to thredds-join@unidata.ucar.edu and respond to the confirmation email. Mailing list archives are available on the Unidata website.

We appreciate feedback from users of this package. Please send comments, suggestions, and bug reports to support-thredds@unidata.ucar.edu, and please identify the version of the package you are using.

THREDDS Catalogs

THREDDS Catalogs can be thought of as representing logical directories of on-line data resources. They are encoded as XML and provide a place for annotations and other metadata about the data resources. These XML documents are how THREDDS-enabled data consumers find out what data is available from data providers.
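
For example, a THREDDS-enabled client such as the Siphon Python package can read a catalog's XML directly. A minimal sketch (the catalog URL is illustrative):

from siphon.catalog import TDSCatalog

# Point this at any THREDDS catalog; the URL below is illustrative.
cat = TDSCatalog('https://thredds.ucar.edu/thredds/catalog.xml')
print(list(cat.catalog_refs))  # nested catalogs: the "logical directories"
print(list(cat.datasets))      # datasets described at this level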

THREDDS Catalog documentation (including the specification) is available from the TDS documentation pages.

Licensing

The THREDDS Data Server is released under the BSD 3-Clause license, which can be found in this repository's LICENSE file.

Furthermore, this project includes code from third-party open-source software components:

  • Gretty: for details, see buildSrc/README.md
  • JUnit: for details, see tds-test-utils/README.md

Each of these software components has its own license. Please see docs/src/private/licenses/third-party/.

Previous releases

Prior to v5.0.0, the netCDF-Java/CDM library and the THREDDS Data Server (TDS) were built and released together. Starting with version 5, the two packages have been decoupled, allowing new features and bug fixes to be implemented in each package, and each package to be released, independently. Releases prior to v5.0.0 were managed at https://github.com/unidata/thredds, which holds the combined code base used by v4.6 and earlier.

tds's People

Contributors

barronh, bencaradocdavies, cofinoa, cskarby, cwardgar, danfruehauf, dennisheimbigner, donmurray, dopplershift, ennawilson, ethanrd, geojs, hvandam2, jlcaron, johnlcaron, julienchastang, lesserwhirls, madry, massonjr, mhermida, mnlerman, oxelson, petejan, rkambic, rschmunk, skaymen, tdrwenski, tkunicki, vegasm, yuanho


tds's Issues

Default Jupyter service reads full dataset to find ndims

To report a non-security related issue, please provide:

  • the version of the software with which you are encountering an issue
    THREDDS Version 5.0.0-SNAPSHOT - 2020-06-25T10:27:45-0600

  • environmental information (i.e. Operating System, compiler info, java version, python version, etc.)
    siphon 0.8.0, jupyterlab

  • a description of the issue with the steps needed to reproduce it

  1. Downloaded the Jupyter notebook from the MRMS reflectivity collection
  2. Ran notebook to get the widget for selecting fields. Clicked on the reflectivity variable
  3. Ran the cell that does access and plotting

I received the output and error below. It looks like the line that checks ndims requests the full variable data with var[:]. The server, quite sensibly, refuses to return that much data.

MergedBaseReflectivityQC_altitude_above_msl
['time', 'altitude_above_msl', 'lat', 'lon']
(22001, 1, 3500, 7000)
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
<ipython-input-15-3529a7eb11df> in <module>
      9 canPlot = var.dtype == np.uint8 or np.can_cast(var.dtype, float, "same_kind") # Only plot numeric types
     10 if (canPlot):
---> 11     ndims = np.squeeze(var[:]).ndim
     12     # for one-dimensional data, print value
     13     if (ndims == 0):

~/miniconda3/envs/glmval/lib/python3.6/site-packages/siphon/cdmr/dataset.py in __getitem__(self, ind)
    180 
    181             # Get the data for our request. We assume we only get 1 message.
--> 182             messages = self.dataset.cdmr.fetch_data(**{self.path: ind})
    183             arr = messages[0]
    184 

~/miniconda3/envs/glmval/lib/python3.6/site-packages/siphon/cdmr/cdmremote.py in fetch_data(self, **var)
     34                           for name, ind in var.items())
     35         query = self.query().add_query_parameter(req='data', var=varstr)
---> 36         return self._fetch(query)
     37 
     38     def fetch_header(self):

~/miniconda3/envs/glmval/lib/python3.6/site-packages/siphon/cdmr/cdmremote.py in _fetch(self, query)
     19 
     20     def _fetch(self, query):
---> 21         return read_ncstream_messages(BytesIO(self.get_query(query).content))
     22 
     23     def fetch_capabilities(self):

~/miniconda3/envs/glmval/lib/python3.6/site-packages/siphon/http_util.py in get_query(self, query)
    400         """
    401         url = self._base[:-1] if self._base[-1] == '/' else self._base
--> 402         return self.get(url, query)
    403 
    404     def url_path(self, path):

~/miniconda3/envs/glmval/lib/python3.6/site-packages/siphon/http_util.py in get(self, path, params)
    485                                      'Server Error ({1:d}: {2})'.format(resp.request.url,
    486                                                                         resp.status_code,
--> 487                                                                         text))
    488         return resp
    489 

HTTPError: Error accessing https://thredds-test.unidata.ucar.edu/thredds/cdmremote/grib/NCEP/MRMS/BaseRef/TP?req=data&var=%2FMergedBaseReflectivityQC_altitude_above_msl
Server Error (403: Request Too Large: RequestTooLarge: Len greater that 100M )

The notebook served by thredds-test seems to be the default since it matches the notebook in this repo.

While accessing var.ndims works directly, it seems the code should instead mimic the squeeze operation using the shape metadata alone, so that no data has to be transferred just to determine the effective rank. So, I propose the access/plot block in the default notebook be changed to:

var = dataset.variables[var_name.value]
# display information about the variable
print(var.name)
print(list(var.dimensions))
print(var.shape)

%matplotlib inline
# attempt to plot the variable
canPlot = var.dtype == np.uint8 or np.can_cast(var.dtype, float, "same_kind")  # only plot numeric types
if canPlot:
    # count the non-degenerate dimensions from the shape metadata alone,
    # without transferring any data
    ndims = len([s for s in var.shape if s > 1])
    # for zero-dimensional data, print the value
    if ndims == 0:
        print(var.name, ": ", var)
    # for one-dimensional data, make a line plot
    elif ndims == 1:
        data = np.squeeze(var[:])
        plt.plot(np.arange(len(data)), data, 'bo', markersize=5)
        plt.title(var.name)
        plt.show()
    # for two-dimensional data, make an image
    elif ndims == 2:
        plt.imshow(np.squeeze(var[:]))
        plt.title(var.name)
        plt.show()
    # for three or more dimensions, print values
    else:
        print("Too many dimensions - Cannot display variable: ", var.name)
        print(var)
else:
    print("Not a numeric type - Cannot display variable: ", var.name)
    print(var)

"large" NcML in catalog not working with ChronicleMap (was: Nested aggregation failed in TDS 5)

With the latest TDS 5 from:
https://artifacts.unidata.ucar.edu/repository/unidata-snapshots/edu/ucar/tds/5.0.0-SNAPSHOT/tds-5.0.0-20180405.122614-436.war

I get the following error message from OPeNDAP (or WMS, HTTPServer) with this URL:
http://localhost:8080/thredds/dodsC/ww3.nc.html

Error {
    code = 500;
    message = "barf with read size=1271 in.available=1024";
};

The dataset is a union of two joinExisting aggregations, each built by scanning a directory.
This configuration works with TDS 4.6.11.

Configuration:

  • tomcat-8.5.28
  • oracle jdk1.8.0_121
  • catalog.xml
  <dataset name="ww3" ID="ww3" urlPath="ww3.nc">
    <metadata inherited="true">
      <serviceName>all</serviceName>
      <dataType>Grid</dataType>
    </metadata>
   <netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
    <aggregation type="union">
       <netcdf>
        <aggregation dimName="time" type="joinExisting">    
         <scan location="/ww3/dp/" suffix="*.nc" subdirs="false"/>
        </aggregation>
       </netcdf> 
       <netcdf>
        <aggregation dimName="time" type="joinExisting">    
         <scan location="/ww3/hs/" suffix="*.nc" subdirs="false"/>
        </aggregation>
       </netcdf> 
      </aggregation>
   </netcdf>   
  </dataset>

Logs (threddsServlet.log):

2018-04-06T12:12:59.672 +0200 [     39309][      10] INFO  - threddsServlet - Remote host: 127.0.0.1 - Request: "GET /thredds/dodsC/ww3.nc.html HTTP/1.1"
2018-04-06T12:12:59.717 +0200 [     39354][      10] ERROR - thredds.server.opendap.OpendapServlet - request= ReqState:
  serverClassName:    'null'
  dataSet:            'ww3.nc'
  requestSuffix:      'html'
  CE:                 ''
  compressOK:          true
  InitParameters:

java.lang.RuntimeException: barf with read size=1271 in.available=1024
	at thredds.server.catalog.tracker.DatasetExt.readExternal(DatasetExt.java:79) ~[tdcommon-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT]
	at net.openhft.chronicle.hash.serialization.impl.ExternalizableReader.read(ExternalizableReader.java:49) ~[chronicle-map-3.14.5.jar:3.14.5]
	at net.openhft.chronicle.hash.serialization.impl.ExternalizableReader.read(ExternalizableReader.java:30) ~[chronicle-map-3.14.5.jar:3.14.5]
	at net.openhft.chronicle.hash.serialization.impl.BytesAsSizedReader.read(BytesAsSizedReader.java:42) ~[chronicle-map-3.14.5.jar:3.14.5]
	at net.openhft.chronicle.map.VanillaChronicleMap.searchValue(VanillaChronicleMap.java:635) ~[chronicle-map-3.14.5.jar:3.14.5]
	at net.openhft.chronicle.map.VanillaChronicleMap.tieredValue(VanillaChronicleMap.java:569) ~[chronicle-map-3.14.5.jar:3.14.5]
	at net.openhft.chronicle.map.VanillaChronicleMap.optimizedGet(VanillaChronicleMap.java:521) ~[chronicle-map-3.14.5.jar:3.14.5]
	at net.openhft.chronicle.map.VanillaChronicleMap.get(VanillaChronicleMap.java:470) ~[chronicle-map-3.14.5.jar:3.14.5]
	at thredds.server.catalog.tracker.DatasetTrackerChronicle.findResourceControl(DatasetTrackerChronicle.java:185) ~[tdcommon-5.0.0-SNAPSHOT.jar:5.0.0-SNAPSHOT]
	at thredds.core.DatasetManager.resourceControlOk(DatasetManager.java:368) ~[classes/:5.0.0-SNAPSHOT]
	at thredds.core.DatasetManager.openNetcdfFile(DatasetManager.java:133) ~[classes/:5.0.0-SNAPSHOT]
	at thredds.core.TdsRequestedDataset.openAsNetcdfFile(TdsRequestedDataset.java:121) ~[classes/:5.0.0-SNAPSHOT]
	at thredds.core.TdsRequestedDataset.getNetcdfFile(TdsRequestedDataset.java:68) ~[classes/:5.0.0-SNAPSHOT]
	at thredds.server.opendap.OpendapServlet.getDataset(OpendapServlet.java:878) ~[classes/:5.0.0-SNAPSHOT]
	at thredds.server.opendap.OpendapServlet.doGetHTML(OpendapServlet.java:552) ~[classes/:5.0.0-SNAPSHOT]
	at thredds.server.opendap.OpendapServlet.doGet(OpendapServlet.java:171) [classes/:5.0.0-SNAPSHOT]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_66]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_66]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_66]
	at java.lang.reflect.Method.invoke(Method.java:497) ~[?:1.8.0_66]
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) [spring-web-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133) [spring-web-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:635) [servlet-api.jar:?]
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) [spring-webmvc-4.3.13.RELEASE.jar:4.3.13.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) [servlet-api.jar:?]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [catalina.jar:8.5.28]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.28]
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-websocket.jar:8.5.28]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.28]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.28]
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) [spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
	at thredds.servlet.filter.RequestBracketingLogMessageFilter.doFilter(RequestBracketingLogMessageFilter.java:53) [classes/:5.0.0-SNAPSHOT]
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) [spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
	at thredds.servlet.filter.RequestCORSFilter.doFilte 

The data:
ftp://ftp.ifremer.fr/ifremer/ww3/HINDCAST/NORGAS/2013/dp/ww3.201303_dp.nc
ftp://ftp.ifremer.fr/ifremer/ww3/HINDCAST/NORGAS/2013/dp/ww3.201304_dp.nc
ftp://ftp.ifremer.fr/ifremer/ww3/HINDCAST/NORGAS/2013/dp/ww3.201305_dp.nc

ftp://ftp.ifremer.fr/ifremer/ww3/HINDCAST/NORGAS/2013/hs/ww3.201303_hs.nc
ftp://ftp.ifremer.fr/ifremer/ww3/HINDCAST/NORGAS/2013/hs/ww3.201304_hs.nc
ftp://ftp.ifremer.fr/ifremer/ww3/HINDCAST/NORGAS/2013/hs/ww3.201305_hs.nc

Thanks

HTTPFileCache ever actually used?

Running the latest TDS5, I'm working on getting all our datasets to respond within our 60-second timeout limit. I have some large union aggregations that are taking too long to initialize. I can get them to work once, but the next time I come around scanning all the endpoints in the catalog, they take too long again!

There is an element in threddsConfig.xml

  <!--
  The <HTTPFileCache> element:
  allow 10 - 20 open datasets, cleanup every 17 minutes
  used by HTTP Range requests.
  -->
  <HTTPFileCache>
    <minFiles>10</minFiles>
    <maxFiles>20</maxFiles>
    <scour>17 min</scour>
  </HTTPFileCache>

Does this actually get used anywhere? Searching the code, it doesn't appear to. What can I do to keep these unions fresh so that just pulling back the .dds/.das is quick?

"Send trigger" links in the admin/debug interface should use 'test' not 'nocheck'

THREDDS-5.4-SNAPSHOT from 05-13-2022
CentOS 7, AdoptOpenJDK 11.0.15+10, Tomcat 8.5.75

When using the admin/debug interface and navigating to the admin/collection/showCollection page for a given dataset, there is a "Send trigger to datasetName" link. However, since the trigger action is set to nocheck, this trigger is only effective if there is currently no collection index for that dataset. This is virtually impossible since every option for update, except maybe never, would have built the initial index on server startup. The link cannot be used to update a dataset that you have recently added files to.

Currently the link's target is something like:

https://host/thredds/admin/collection/trigger?collection=name&trigger=nocheck

I believe it should be more like (test instead of nocheck):

https://host/thredds/admin/collection/trigger?collection=name&trigger=test

I have confirmed that if I start an FMRC, add files, then use the trigger in its existing form (nocheck), the FMRC dataset does NOT update. However, if I replace nocheck with test, the dataset updates.
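
For reference, the same trigger can be sent programmatically. A minimal sketch using Python requests; the host, collection name, and credentials are placeholders, and the admin endpoint requires a properly authorized user:

import requests

# Placeholders: host, collection name, and credentials are illustrative.
resp = requests.get('https://host/thredds/admin/collection/trigger',
                    params={'collection': 'name', 'trigger': 'test'},
                    auth=('adminuser', 'password'))
print(resp.status_code, resp.text)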

"Query Dataset" Support

To report a non-security related issue, please provide:

  • the version of the software with which you are encountering an issue

5.0.0-beta7

  • environmental information (i.e. Operating System, compiler info, java version, python version, etc.)

RHEL 7, Java 1.8.0_212

  • a description of the issue with the steps needed to reproduce it

I am looking at "dynamic datasets" as described here https://www.unidata.ucar.edu/software/tds/v4.6/catalog/InvCatalogSpec.html#datasetClassification

I am aware this is documentation for 4.6, but was thinking it may still work in 5.

I can't seem to get the query dataset type to work. It is not generating a clickable link. I think my main barrier is understanding the exact meaning of the directions in the docs linked above.

Here is what I have so far in my catalog block:

...
<service name="qdataset" serviceType="Catalog" base="https://[catalog-generator-host]/catalog/project/">
  <property name="requires_authorization" value="false"/>
</service>
...
<dataset name="Test query dataset" urlPath="CMIP6">
  <serviceName>qdataset</serviceName>
</dataset>

Presumably this would build a link as follows: https://[catalog-generator-host]/catalog/project/CMIP6, which returns catalog XML.

A simple example would go a long way here. If it is no longer supported that is fine as well.

If you have a general question about the software, please view our Suggested Support Process.

BTW, I wrote an email regarding something else to the support address a while back (> 1 week ago) and did not receive any response; that is why I am posting here. My support email is now irrelevant, so no worries.

DAP2 pointing to multidimensional variables in a Grid's Maps listing.

There appears to be a bug in the DAP2 server in which multidimensional variables are being used in a Grid's Maps listing. For example, a dataset with a CDL such as (with focus on the reftime and time related parts):

netcdf grib/NCEP/GFS/CONUS_80km/TwoD {
  dimensions:
    reftime = 123;
    time2 = 41;

  variables:
    double reftime(reftime=123);
      :units = "Hour since 2019-10-20T00:00:00Z";

    double time2(reftime=123, time2=41);
      :units = "Hour since 2019-10-20T00:00:00Z";

    float Pressure_surface(reftime=123, time2=41, y=65, x=93);
      :units = "Pa";
      :coordinates = "reftime time2 y x";

ends up with a DAP2 DDS representation that looks like:

Dataset {
    Float64 reftime[reftime = 123];
    Float64 time4[reftime = 123][time4 = 41];
    ...
    Grid {
     ARRAY:
        Float32 Pressure_surface[reftime = 123][time4 = 41][y = 65][x = 93];
     MAPS:
        Float64 reftime[reftime = 123];
        Float64 time4[reftime = 123][time4 = 41];
        Float32 y[y = 65];
        Float32 x[x = 93];
    } Pressure_surface;
    ...
}

The DAP2 specification indicates that each entry under MAPS for a Grid must be a vector (as opposed to a multidimensional array).

For a real world example, check out https://thredds-test.unidata.ucar.edu/thredds/catalog/grib/NCEP/GFS/CONUS_80km/catalog.html?dataset=grib/NCEP/GFS/CONUS_80km/TwoD

NCSS and mixed interval (time) coordinates

Let's suppose that we have two grids for Total_precipitation_surface_Mixed_intervals_Accumulation valid at 2019-11-30 18Z, but one is a 6-hour accumulation and the other is a 3-hour accumulation. Now, let's say a user uses NCSS requesting a single time, 2019-11-30T18:00:00.

Given that the user supplied a single time in the request, it might be expected that we only return one grid (at least, that's what we assume). If we go with that assumption, the question becomes which grid the server should return: the one representing a 3-hour total or the one representing a 6-hour total. Currently, the server computes the mid-point of the time bounds of each accumulation grid and chooses the one whose mid-point is closest to the requested time (effectively, the one with the smallest accumulation period).
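
To make that rule concrete, here is a small sketch of the selection logic; the two intervals below both end at 18Z, and the bounds (in hours) are illustrative:

requested = 18.0                        # the single time in the NCSS request
candidates = {                          # name -> (bound_start, bound_end)
    '3 hour accumulation': (15.0, 18.0),
    '6 hour accumulation': (12.0, 18.0),
}

def midpoint(bounds):
    start, end = bounds
    return (start + end) / 2.0

# mid-points are 16.5 and 15.0; 16.5 is closest to 18.0, so the grid
# with the smallest accumulation period wins
best = min(candidates, key=lambda name: abs(midpoint(candidates[name]) - requested))
print(best)  # 3 hour accumulation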

Now, what if a time range is given in the NCSS request instead of a single value? Right now, NCSS bombs out and returns a message that basically says "not yet implemented". What should we do there? Maybe an extension of the same idea used for a single value when deciding which interval to return for grids with the same valid time but differing time periods? But is that right, or should we return every grid that overlaps the requested interval?

In both the single time and time range requests, it would be nice to allow a user to provide the desired accumulation period via the NCSS API.

For the single time request, what if we did something like time=2019-11-30T18:00:00&timeInterval=P6h

That would uniquely specify a 6 hour duration for the accumulation valid at 2019-11-30 18Z. timeInterval would be set using a positive W3C duration, and we'd return the grid with a valid time closest to time, and then an interval closest to that specified by the timeInterval parameter. The default (i.e. no timeInterval parameter) would behave as it does currently.

For a request with a time range (not yet implemented), by default (i.e. no timeInterval parameter), NCSS would return any accumulation with any interval intersecting the time range in the request. If timeInterval=P6h was used, NCSS would return any 6 hour accumulation intersecting the time_start and time_end parameters of the request.
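
If that were adopted, a client request might look like the following sketch; note that timeInterval is the proposed, not-yet-implemented parameter, and the host and dataset path are placeholders:

import requests

params = {
    'var': 'Total_precipitation_surface_Mixed_intervals_Accumulation',
    'time': '2019-11-30T18:00:00Z',
    'timeInterval': 'P6h',   # proposed parameter: ask for the 6 hour accumulation
    'accept': 'netcdf',
}
# host and dataset path are placeholders
r = requests.get('https://host/thredds/ncss/grid/some/dataset', params=params)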

can't set "content-type" for JSON files (and others)

To report a non-security related issue, please provide:

  • the version of the software with which you are encountering an issue

4.6+, 5

  • environmental information (i.e. Operating System, compiler info, java version, python version, etc.)

5.0 docker container on the latest LTS Ubuntu

  • a description of the issue with the steps needed to reproduce it

Can't get the content-type right in the config.

It seems to be hard-coded here:

tdcommon/src/main/java/thredds/util/ContentType.java

New test failures due to httpservice credential handling changes

With the changes made to httpservices in Unidata/netcdf-java#117 (changes summarized in the PR and documented here), we have new failures in these TDS tests:

thredds.tds.FreshTdsInstallSpec
thredds.server.services.TestAdminDebug
ucar.nc2.util.net.TestSSH
ucar.nc2.util.net.TestTomcatAuth

The issue seems to revolve around the way the following two classes work with credentials:

  1. tds/it/src/test/java/thredds/TestOnLocalServer.java
  2. TestProvider inner class of tds/it/src/test/java/ucar/nc2/util/net/TestSSH.java

None of these seem to indicate actual failures in the functionality of the TDS (for example, I can still access the admin/debug interface through a browser); rather, they point to the way we utilize httpservices in some helper methods/classes that set up the client side of the tests.

NCSS validation errors

TDSv5.x

  • time parameter should be optional
  • Grid as point is not working - throws NullPointerException
  • request validation should inform user if request parameters are outside dataset (currently returns file not found error)

Throwable exception handled : java.lang.IllegalStateException: Invalid target for Validator

To report a non-security related issue, please provide:

  • the version of the software with which you are encountering an issue
# cat /usr/local/tomcat/webapps/thredds/META-INF/MANIFEST.MF 
Manifest-Version: 1.0
Implementation-Title: THREDDS Data Server (TDS)
Implementation-Version: 5.0.0-beta5
Built-By: lesserwhirls
Implementation-Vendor-Id: edu.ucar
Implementation-URL: https://www.unidata.ucar.edu/software/thredds/curr
 ent/tds/TDS.html
Created-By: Gradle 3.5.1
Build-Jdk: 1.8.0_171
Built-On: 2018-09-10T18:14:26-0600
Implementation-Vendor: UCAR/Unidata
  • environmental information (i.e. Operating System, compiler info, java version, python version, etc.)
# uname -a
Linux [hostname] 3.10.0-957.21.3.el7.x86_64 #1 SMP Fri Jun 14 02:54:29 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux

# java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)

# /usr/local/tomcat/bin/catalina.sh version
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Using CATALINA_PID:    /usr/local/tomcat/logs/catalina.pid
Server version: Apache Tomcat/8.5.39
Server built:   Mar 14 2019 11:24:26 UTC
Server number:  8.5.39.0
OS Name:        Linux
OS Version:     3.10.0-957.21.3.el7.x86_64
Architecture:   amd64
JVM Version:    1.8.0_212-b04
JVM Vendor:     Oracle Corporation
  • a description of the issue with the steps needed to reproduce it

We use a homegrown remote catalog service, a Flask application that generates the appropriate catalog XML by querying a back-end service. The following error occurs when clicking a dataset (a .nc link) within a catalog.

Throwable exception handled : java.lang.IllegalStateException: Invalid target for Validator [thredds.server.catalogservice.RemoteCatalogRequestValidator@568fb823]: thredds.server.catalogservice.DatasetContext@754d4e68
	at org.springframework.validation.DataBinder.assertValidators(DataBinder.java:568)
	at org.springframework.validation.DataBinder.setValidator(DataBinder.java:558)
	at thredds.server.catalogservice.RemoteCatalogServiceController.initBinder(RemoteCatalogServiceController.java:58)
...

The relevant function org.springframework.validation.DataBinder.assertValidators looks like this: https://github.com/spring-projects/spring-framework/blob/3a0f309e2c9fdbbf7fb2d348be861528177f8555/spring-context/src/main/java/org/springframework/validation/DataBinder.java#L538

Though this is not the same Spring Framework version, it is likely still relevant.

  • Desired behavior

Get directed to an interface like this:
https://aims3.llnl.gov/thredds/catalog/esgcet/298/CMIP6.CMIP.E3SM-Project.E3SM-1-0.historical.r5i1p1f1.Amon.rsutcs.gr.v20190731.html?dataset=CMIP6.CMIP.E3SM-Project.E3SM-1-0.historical.r5i1p1f1.Amon.rsutcs.gr.v20190731.rsutcs_Amon_E3SM-1-0_historical_r5i1p1f1_gr_200001-201412.nc

Nested aggregation fails with NPE in 5.4 SNAPSHOT. Worked correctly in 4.6.20.

This concerns TDS 5.4-SNAPSHOT

Implementation-Title: THREDDS Data Server (TDS)
Implementation-Version: 5.4-SNAPSHOT

Running on Red Hat Enterprise Linux Workstation release 6.10 (Santiago)
with
openjdk version "11.0.12" 2021-07-20
OpenJDK Runtime Environment 18.9 (build 11.0.12+7)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7, mixed mode)

The attached catalog (called test_B10k.txt for github upload reasons) worked correctly with:
Implementation-Title: THREDDS Data Server (TDS)
Implementation-Version: 4.6.20

But, the nested aggregation fails with 5.4.

2022-05-19T16:00:24.463 +0000 [     71600][       5] ERROR - thredds.server.opendap.OpendapServlet - request= ReqState:
  serverClassName:    'null'
  dataSet:            'B10K-H16_CMIP5_Level1_CESM_BIO_rcp85_collection.nc'
  requestSuffix:      'html'
  CE:                 ''
  compressOK:          true
  InitParameters:

java.lang.NullPointerException: null
        at thredds.inventory.MFileProvider.canProvide(MFileProvider.java:21) ~[cdm-core-5.5.3-SNAPSHOT.jar:5.5.3-SNAPSHOT]

You can download the files needed to test from these links:

https://data.pmel.noaa.gov/aclim/thredds/fileServer/B10K-H16_CMIP5_CESM_BIO_rcp85/Level1/2005-2009/B10K-H16_CMIP5_CESM_BIO_rcp85_2005-2009_average_aice.nc
https://data.pmel.noaa.gov/aclim/thredds/fileServer/B10K-H16_CMIP5_CESM_BIO_rcp85/Level1/2010-2014/B10K-H16_CMIP5_CESM_BIO_rcp85_2010-2014_average_aice.nc
https://data.pmel.noaa.gov/aclim/thredds/fileServer/B10K-H16_CMIP5_CESM_BIO_rcp85/Level1/2015-2019/B10K-H16_CMIP5_CESM_BIO_rcp85_2015-2019_average_aice.nc

https://data.pmel.noaa.gov/aclim/thredds/fileServer/B10K-H16_CMIP5_CESM_BIO_rcp85/Level1/2005-2009/B10K-H16_CMIP5_CESM_BIO_rcp85_2005-2009_average_Ben.nc
https://data.pmel.noaa.gov/aclim/thredds/fileServer/B10K-H16_CMIP5_CESM_BIO_rcp85/Level1/2010-2014/B10K-H16_CMIP5_CESM_BIO_rcp85_2010-2014_average_Ben.nc
https://data.pmel.noaa.gov/aclim/thredds/fileServer/B10K-H16_CMIP5_CESM_BIO_rcp85/Level1/2015-2019/B10K-H16_CMIP5_CESM_BIO_rcp85_2015-2019_average_Ben.nc

test_B10k.txt

How to define a new palette in TDS5/ncWMS2? (v5.0.0-alpha3)

I'm using v5.0.0-alpha3

I have a (very simple) custom colour palette that I was using successfully in TDS4/ncWMS1; however, I can't figure out how to do the same with TDS5/ncWMS2.

Previously in TDS4, I simply defined my new palette by adding my .pal file to WEB-INF/palettes and restarting.

Now, there are a couple of things that are confusing in TDS5:

  • Despite the ~8 palettes included in WEB-INF/palettes, only default appears in the GetCapabilities response (there are a few styles, but in the form default-scalar/default, colored_contours/default, etc.).
  • A GetMetadata request returns 94 palette styles.
  • A GetMap request using any of the above-mentioned 94 styles returns successfully despite them not being declared in GetCapabilities (although perhaps this is a feature; given that styles take the form type/palette, the number of combinations, if listed explicitly, might be a little wordy...).
  • Some of the palettes defined in WEB-INF/palettes are not present in the list returned by GetMetadata, so it seems WEB-INF/palettes may not actually be used at all?

I have tried setting an alternate paletteLocationDir in threddsConfig.xml, but it seems to have no effect.

I'm trying to find where some of these new palettes are coming from (e.g. psu-magma) but have had no luck yet.

Any tips much appreciated!
Thanks,
Dan

How to properly configure for a dataset on S3?

I am having difficulty figuring out the proper configuration for a dataset stored on S3. There is no documented example of this, and the only thing I can find to work from is https://github.com/Unidata/tds/blob/main/tds/src/test/content/thredds/tds-s3.xml. But it isn't clear to me how to adapt that example to our own S3 bucket, let alone that the file appears to use two dataset roots but only defines one.

I am using the latest tds docker container, and rather than focusing on my own particular catalogue.xml, I am hoping to get the ball rolling on producing a well-explained, clean example to add to the documentation.

TDS 5.4 featureCollection question

Since I moved to the recent version of TDS 5.4-SNAPSHOT (the docker image), my old setup no longer works. I used 4.6.20-SNAPSHOT before, where it worked just fine. The problem is as follows:

Here is a description of my collection:

  <featureCollection name="Operational Forecast on Model Levels"
    path="operational/forecast/modellevel"
    featureType="GRIB2">

    <metadata inherited="true">
      <serviceName>allServices</serviceName>
      <documentation type="summary">ECMWF operational forecasts on model levels.</documentation>
      <dataFormat>GRIB-2</dataFormat>
    </metadata>

    <collection name="op_fc_ml"
      spec="/data/thredds-testmerged/ecmwf/operational/fc/ml/**/ecmwf_fc_ml_0\.5x0\.5.*\.grb$"
      dateFormatMark="#_0.5x0.5_#yyyyMMdd-HHmm"
      timePartition="file"
      olderThan="15 min"/> 

    <update startup="never" trigger="allow"/> 
    <tdm rewrite="test" rescan="0 0/5 * * * ? *"/>

    <gribConfig datasetTypes="Files" />

  </featureCollection> 

For test purposes I have prepared a list of 4 GRIB-2 files:

ecmwf_fc_ml_0.5x0.5_20220101-0000.grb
ecmwf_fc_ml_0.5x0.5_20220102-0000.grb
ecmwf_fc_ml_0.5x0.5_20220103-0000.grb
ecmwf_fc_ml_0.5x0.5_20220104-0000.grb

Each GRIB2 file has its own dataDate corresponding to the date in the filename, a dataTime field equal to 0, and a forecastTime fixed to one of the values [6, 12, 18, 24]:

***** FILE: ecmwf_fc_ml_0.5x0.5_20220104-0000.grb
#==============   MESSAGE 1 ( length=521140 )              ==============
GRIB {
  # Meteorological products  (grib2/tables/27/0.0.table)
  discipline = 0;
  editionNumber = 2;
  # Start of forecast  (grib2/tables/27/1.2.table)
  significanceOfReferenceTime = 1;
  dataDate = 20220104;
  dataTime = 0;
....
  # Hour (stepUnits.table)
  stepUnits = 1;
  forecastTime = 6;
  stepRange = 6;

Since I set the timePartition parameter in my collection definition to "file", I expect each resulting per-file OpenDAP dataset to be independent of the other files and to contain exactly the 4 values above (which is only true with 1 GRIB file in the collection). What I get in my setup with 4 files is an array of concatenated and growing numbers, so that every new file adds exactly 4 new values to every other file's dataset. Here is an example:

Dataset {
    Float64 time[time = 16];
} operational/forecast/modellevel/op_fc_ml-202201/ecmwf_fc_ml_0.5x0.5_20220104-0000.grb;
---------------------------------------------
time[16]
6.0, 12.0, 18.0, 24.0, 30.0, 36.0, 42.0, 48.0, 54.0, 60.0, 66.0, 72.0, 78.0, 84.0, 90.0, 96.0

The array is the same for each of those 4 files.

Additionally, if I access my dataset via ncdump, I see that all 4 files report the same start time, equal to that of the first file in the collection. Please have a look at the time:units value:

$>ncdump -v time https://mythredds/thredds/dodsC/operational/forecast/modellevel/op_fc_ml-202201/ecmwf_fc_ml_0.5x0.5_20220104-0000.grb 

netcdf ecmwf_fc_ml_0.5x0.5_20220104-0000 {
dimensions:
        hybrid = 137 ;
        hybrid1 = 1 ;
        lat = 361 ;
        lon = 720 ;
        time = 16 ;
variables:
        float lat(lat) ;
                lat:units = "degrees_north" ;
                lat:_CoordinateAxisType = "Lat" ;
        float lon(lon) ;
                lon:units = "degrees_east" ;
                lon:_CoordinateAxisType = "Lon" ;
        double time(time) ;
                time:units = "Hour since 2022-01-01T00:00:00Z" ;
                time:standard_name = "time" ;
                time:long_name = "GRIB forecast or observation time" ;
                time:calendar = "proleptic_gregorian" ;
                time:_CoordinateAxisType = "Time" ;

The documentation at https://www.unidata.ucar.edu/software/tds/current/reference/collections/Partitions.html says:

file: each file is a partition.
...
File Partition
In order to use a file partition, all of the records for a reference time must be contained in a single file. 
The common case is that each file contains all of the records for a single reference time.

Am I doing something wrong, or is it a feature of TDS5 that the files are not handled independently of each other? Please advise.

OPeNDAP Data URL form set to http and not https when behind a proxy

Environment

  • the version of the software with which you are encountering an issue
    • TDS Docker container 5.3
  • Environment
    AWS EC2 behind a load balancer (important)
root@1c607a9eb9d0:/usr/local/tomcat# bin/catalina.sh version
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr/local/openjdk-11
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Using CATALINA_OPTS:
NOTE: Picked up JDK_JAVA_OPTIONS:  --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
Server version: Apache Tomcat
Server built:   Jan 17 2022 22:07:47 UTC
Server number:  8.5.75.0
OS Name:        Linux
OS Version:     4.14.268-205.500.amzn2.x86_64
Architecture:   amd64
JVM Version:    11.0.13+8
JVM Vendor:     Oracle Corporation

Issue

We run a load balancer in AWS that maps to one or more EC2 instances running the TDS docker image. The load balancer expects https connections from the user, and this is where SSL termination occurs; traffic from the load balancer to the TDS containers is all plain HTTP. When a user navigates to the OPeNDAP pages (and probably others, but this is where we are), the 'Data URL' is not secure: it uses http, and I can't find a way to override this with https.

Because the data form URL is not https, the form's download actions are unreliable: the Get ASCII button works (presumably because the browser allows forwarding from http to https), but the Get Binary button doesn't work unless we manually set the protocol to https.

Screen Shot 2022-04-12 at 1 53 54 PM

Is there a way to 'force' https in the data form URL? I think this would also manifest itself in the ncsubset forms, but I can't confirm that right now (we've turned them off). I'm aware that OPeNDAP had this same issue, and Hyrax now has a configuration element for it:

Hyrax/OPeNDAP Option to fix this

The ForceDataRequestFormLinkToHttps element (optional)

'ForceDataRequestFormLinkToHttps' - The presence of this element will cause the Data Request Form interfaces to "force" the dataset URL to HTTPS. This is useful for situations where the server is sitting behind a connection management tool (like CloudFront) whose outward-facing connections are HTTPS but Hyrax is not using HTTPS, so the internal URLs being received by Hyrax are on HTTP. When these URLs are exposed via the Data Request Forms, they can cause some clients issues with session dropping because the protocols are not consistent.

OPeNDAP Common Problems

Is the above available in the THREDDS service? If so, in which configuration file should it be placed?

Contribute Jupyter Notebook dataset viewers to the TDS

About

Unidata is looking for Python programmers to contribute Jupyter Notebook dataset viewers to the TDS Jupyter Notebook service.
To contribute to this issue, create a Jupyter Notebook (or use a notebook or Python script you've already written!) that does the following:

  1. accesses data from the TDS (see the Siphon docs for guidance on accessing data; a minimal sketch follows this list)
  2. does cool things with accessed data (e.g. visualizations, analyses)
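
For instance, a minimal access sketch with Siphon; the catalog URL is illustrative:

from siphon.catalog import TDSCatalog

# The catalog URL is illustrative; any TDS catalog works.
cat = TDSCatalog('https://thredds.ucar.edu/thredds/catalog/'
                 'grib/NCEP/GFS/Global_0p25deg/catalog.xml')
ds = cat.datasets[0]        # pick the first dataset in the catalog
nc = ds.remote_access()     # open it for remote data access
print(list(nc.variables))   # see what the dataset contains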

We're looking for two categories of notebooks:

  1. Generic viewers that would be useful in a typical THREDDS Data Server, e.g. a Notebook that plots any gridded data. These viewers will be packaged with the TDS and distributed as default viewers. If your viewer fits this category, continue on to "Contributing to this issue".
  2. Viewers for specific datasets included in the Unidata THREDDS Data Server, e.g. a viewer for all GFS Quarter Degree Forecast datasets. If you have a viewer that does cool things with any dataset found here, see TdsConfig#93 instead.

Contributing to this issue

To contribute a Notebook that will be included as a default viewer in the TDS (type 1 above), submit a pull request that includes your .ipynb file in the startup directory.

Please see the Unidata Contributors Guide for guidelines on how to submit a pull request.

Helpful links

  • To learn more about Jupyter Notebooks, visit the project docs.

  • To learn more about configuring and contributing Notebook viewers, read the docs.

  • To see an example of a Notebook viewer, check out the default TDS notebook viewer

  • The Siphon package is an excellent way to get started working with the TDS in Python.

Thank you for considering contributing!

Add JSON response for WMS GetFeatureInfo request

How to define new styles in TDS5/ncWMS2?

I'm using v5.0.0-alpha3.

I'm trying to add new styles to TDS5, but I haven't been able to do so.

The styles and the palettes used in TDS5 are contained in edal-graphics-1.2.7.jar. How can new styles be added without touching that jar file?

Thank you!!

Tracking recent log4j issues

Hello THREDDS users - this issue is being opened to keep users who are not subscribed to the mailing list updated on the log4j and TDS saga.

As of December 18th, 2021, the recommended releases of the TDS are snapshot releases, 5.3-SNAPSHOT and 4.6.19-20211218.154246-4. Both can be found on the TDS downloads page. These releases use log4j 2.17.0 and address CVE-2021-44228, CVE-2021-45046, and CVE-2021-45105.

The THREDDS team plans to release an official (non-snapshot) release of both TDS 5.x and 4.6.x next week; however, there is no difference between a snapshot and a full release other than the process of naming and archiving the version. The snapshots available are complete and stable.

We will keep you updated here as the situation progresses.

best,
THREDDS development team

Issues with NCSS under TDS5

Under TDS4 and TDS5, trying NCSS with a NetCDF file that uses two-dimensional coordinate variables, as described by the CF conventions, produces two different responses:

Under TDS4.latest, all is well, and I can extract values from the file:

TDS_4

Under TDS5 SNAPSHOT-5.4: I run into the following problems (some are the same as #169):

  • Map of the horizontal extent is wrong
  • It is asking for time, while time is supposed to be optional

TDS_5

  • Can display variables correctly, but when using the same
    limits as I used with TDS4, it throws InvalidRangeException

Error_range

  • When changing the horizontal extent to the full extension, I receive a "Dimension y does not exist" error.

Error_y

FMRC Download index file for <dataset>

THREDDS-5.4-SNAPSHOT from 05-13-2022
CentOS 7, AdoptOpenJDK 11.0.15+10, Tomcat 8.5.75

When using the admin/debug interface and navigating to the admin/collection/showCollection page for a given dataset, there is a "Download index file for datasetName" link. However, clicking the link always gives the error:

_datasetName_ NOT FOUND

even for a working fully updated FMRC dataset.

Feature Request: Prometheus Exporter

Feature Request

The monitoring tool Prometheus is gaining popularity and is being adopted by major software projects, Solr and JupyterHub, for example.

The model is that a central Prometheus server "scrapes" (pulls) metrics from "exporters" via HTTP and stores what it collects. Exporters are generally integrated into services. The TDS is an excellent candidate for an exporter.

Prometheus sets a number of standards and strongly recommends they be followed; its Best Practices for Exporters page lays them out nicely. With the existence of 'tdsMonitor', this may be an easy feature to add, though I am not familiar with the inner workings of the TDS, so that is just a guess.
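
To illustrate the exporter model (this is not an existing TDS integration): an exporter is just an HTTP endpoint that serves current metric values in a text format, which the Prometheus server polls. A minimal sketch with the Python prometheus_client library; the metric name, label values, and port are made up:

import random
import time
from prometheus_client import Counter, start_http_server

# Metric name and labels are illustrative, not existing TDS metrics.
REQUESTS = Counter('tds_requests_total',
                   'Number of requests handled, by service',
                   ['service'])

start_http_server(9091)  # Prometheus would scrape http://host:9091/metrics

while True:  # stand-in for real request handling
    REQUESTS.labels(service=random.choice(['opendap', 'wms', 'ncss'])).inc()
    time.sleep(1)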

How to configure the WMS for 5.0?

Hey dear TDS developers!

We are running a THREDDS server using unidata/thredds-docker:5.0-beta8 and would like to use the WMS functionality there. However, as I understood from Unidata/thredds#909 (and confirmed myself by manipulating the wmsConfig.xml file), the 5.0 docs on configuring the WMS are outdated. Do you have any starting point for learning how to configure the WMS, e.g. how to change the default style for specific variables, and where this config has to be placed?

Thanks for your support!

TDM with no -tds flag

We should improve the TDM to provide a warning (or error) if the tdm configuration elements in the catalogs say to expect a trigger but the -tds flag was not used when starting the TDM.

Why is TDS v5.0 still in beta version?

Hello wonderful developers at unidata!

We would like to install a THREDDS server at our institute and are wondering about the release policy of the TDS. Can we safely install the latest beta of TDS (v5.0.0-beta7) on our server? We would like to use it for the new and improved features (e.g. the upload and download functionality). Or should we rather use 4.6, because it is the latest stable release and therefore (eventually) more secure?

A general question: according to the blog, version 5 should have been released in May 2018 already, but we could not find any further information. Can you say anything about the progress toward a stable v5.0.0 release?

Best regards and thanks a lot for your great services to the netCDF-Community!

DAP4 exception when an ncml file is served

When a file in .ncml format is accessed using the DAP4 service, the following exception is raised:

 <Error httpcode="400"> <Message>dap4.core.util.DapException: CDMDSP: cannot process: /oceano/gmeteo/WORK/zequi/DATASETS/cmip5-esm-subset-day/ncmls/cmip6_CMIP_BCC_BCC-CSM2-MR_esm-hist_r1i1p1f1_gn.ncml at dap4.cdm.dsp.CDMDSP.open(CDMDSP.java:121) at dap4.cdm.dsp.CDMDSP.open(CDMDSP.java:36) at dap4.servlet.DapCache.open(DapCache.java:94) at dap4.servlet.DapController.doDMR(DapController.java:307) at dap4.servlet.DapController.handleRequest(DapController.java:230) at thredds.server.dap4.Dap4Controller.handleRequest(Dap4Controller.java:74) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:849) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:760) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861) at javax.servlet.http.HttpServlet.service(HttpServlet.java:622) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) at thredds.servlet.filter.RequestBracketingLogMessageFilter.doFilter(RequestBracketingLogMessageFilter.java:53) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at thredds.servlet.filter.RequestCORSFilter.doFilterInternal(RequestCORSFilter.java:53) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at thredds.servlet.filter.RequestQueryFilter.doFilter(RequestQueryFilter.java:92) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at thredds.servlet.filter.HttpHeadFilter.doFilter(HttpHeadFilter.java:46) at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214) at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.logging.log4j.web.Log4jServletFilter.doFilter(Log4jServletFilter.java:71) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:318) at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:192) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502) at org.apache.coyote.ajp.AbstractAjpProcessor.process(AbstractAjpProcessor.java:877) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1519) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1475) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) Caused by: dap4.core.util.DapException: java.io.IOException: java.io.IOException: Cant read /oceano/gmeteo/WORK/zequi/DATASETS/cmip5-esm-subset-day/ncmls/cmip6_CMIP_BCC_BCC-CSM2-MR_esm-hist_r1i1p1f1_gn.ncml: not a valid CDM file. at dap4.cdm.dsp.CDMDSP.createNetcdfFile(CDMDSP.java:1090) at dap4.cdm.dsp.CDMDSP.open(CDMDSP.java:117) ... 64 more Caused by: java.io.IOException: java.io.IOException: Cant read /oceano/gmeteo/WORK/zequi/DATASETS/cmip5-esm-subset-day/ncmls/cmip6_CMIP_BCC_BCC-CSM2-MR_esm-hist_r1i1p1f1_gn.ncml: not a valid CDM file. at ucar.nc2.NetcdfFile.open(NetcdfFile.java:498) at dap4.cdm.dsp.CDMDSP.createNetcdfFile(CDMDSP.java:1081) ... 65 more Caused by: java.io.IOException: Cant read /oceano/gmeteo/WORK/zequi/DATASETS/cmip5-esm-subset-day/ncmls/cmip6_CMIP_BCC_BCC-CSM2-MR_esm-hist_r1i1p1f1_gn.ncml: not a valid CDM file. at ucar.nc2.NetcdfFile.open(NetcdfFile.java:895) at ucar.nc2.NetcdfFile.open(NetcdfFile.java:495) ... 
66 more </Message> <Context>http://spock.meteo.unican.es/tds5/dap4/cmip6/cmip6_CMIP_BCC_BCC-CSM2-MR_esm-hist_r1i1p1f1_gn.ncml.dmr.xml</Context>

The example endpoint can be found here http://spock.meteo.unican.es/tds5/catalog/catalogs/cmip6.html?dataset=cmip6_CMIP_BCC_BCC-CSM2-MR_esm-hist_r1i1p1f1_gn

@zequihg50

Complete content retrieval on HTTP HEAD requests puts an unnecessary burden on tds

According to https://tools.ietf.org/html/rfc7231#section-4.3.2 payload headers are optional for HTTP HEAD responses.

Content-Length is a payload header according to https://tools.ietf.org/html/rfc7231#section-3.3

In thredds.servlet.filter.HttpHeadFilter:

HttpServletResponse httpServletResponse = (HttpServletResponse) response;
NoBodyResponseWrapper noBodyResponseWrapper = new NoBodyResponseWrapper(httpServletResponse);
chain.doFilter(new ForceGetRequestWrapper(httpServletRequest), noBodyResponseWrapper);
noBodyResponseWrapper.setContentLength();
a complete GET request is processed to compute the Content-Length, and the HTTP body is discarded. This seems like a waste of resources, especially for large datasets possibly spanning several files (e.g. via ncml aggregates). I think it is better to handle this explicitly with a pair of functions: one sets the HTTP headers (except for payload headers) and one writes the body. The GET handlers would call both, while HEAD requests would call only the first; this way we can give swift responses to HTTP HEAD requests and save resources on the server side.
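
A minimal sketch of that split, using Python's standard http.server purely for illustration (the TDS itself is a Java servlet, and the file name below is a stand-in): HEAD sends the non-payload headers and stops, while GET reuses them and then produces the body:

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

DATA_FILE = 'large_dataset.nc'  # stand-in for a served dataset

class Handler(BaseHTTPRequestHandler):
    def send_common_headers(self):
        # everything except payload headers (Content-Length etc.),
        # so nothing about the body needs to be computed
        self.send_response(200)
        self.send_header('Content-Type', 'application/x-netcdf')

    def do_HEAD(self):
        self.send_common_headers()
        self.end_headers()  # no payload headers, no body

    def do_GET(self):
        self.send_common_headers()
        with open(DATA_FILE, 'rb') as f:
            body = f.read()
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 8080), Handler).serve_forever()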

Announcement: switch default branch to "main"

The default branch for the TDS repository will be moving from master to main on May 28th, 2021. The main branch exists now, and any existing workflows can be migrated over starting today.

ChronicleMap - Attempt to allocate #3 extra segment tier, 2 is maximum

I keep seeing this error. This FMRC worked fine last night but this morning I am getting the following error:

ERROR ucar.nc2.ft.fmrc.Fmrc: makeFmrcInv
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: ChronicleMap{name=GridDatasetInv, file=/usr/local/tomcat/content/thredds/cache/collection/GridDatasetInv.dat, identityHashCode=401224888}: Attempt to allocate #3 extra segment tier, 2 is maximum.
Possible reasons include:

  • you have forgotten to configure (or configured wrong) builder.entries() number
  • same regarding other sizing Chronicle Hash configurations, most likely maxBloatFactor(), averageKeySize(), or averageValueSize()
  • keys, inserted into the ChronicleHash, are distributed suspiciously bad. This might be a DOS attack
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2051) ~[guava-30.1-jre.jar:?]
    at com.google.common.cache.LocalCache.get(LocalCache.java:3951) ~[guava-30.1-jre.jar:?]
    ...
    Caused by: java.lang.IllegalStateException: ChronicleMap{name=GridDatasetInv, file=/usr/local/tomcat/content/thredds/cache/collection/GridDatasetInv.dat, identityHashCode=401224888}: Attempt to allocate #3 extra segment tier, 2 is maximum.
    Possible reasons include:
    ...
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155) ~[guava-30.1-jre.jar:?]
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045) ~[guava-30.1-jre.jar:?]
    ... 66 more

Radar server top-level collection catalog broken on TDS 5

Radar Server, at least getting the top-level catalog, is broken on 5. On our 4.6 machine the top-level radar catalog returns:

<catalog xmlns="http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0" xmlns:xlink="http://www.w3.org/1999/xlink" name="THREDDS Radar Server" version="1.0.7">
<service name="radarServer" serviceType="QueryCapability" base="/thredds/radarServer/"/>
<dataset name="Radar Data">
<catalogRef xlink:href="nexrad/level2/CCS039/dataset.xml" xlink:title="NEXRAD Level II Radar for Case Study CCS039" name="NEXRAD Level II Radar for Case Study CCS039"/>
<catalogRef xlink:href="nexrad/level2/IDD/dataset.xml" xlink:title="NEXRAD Level II Radar from IDD" name="NEXRAD Level II Radar from IDD"/>
<catalogRef xlink:href="nexrad/level3/CCS039/dataset.xml" xlink:title="NEXRAD Level III Radar for Case Study CCS039" name="NEXRAD Level III Radar for Case Study CCS039"/>
<catalogRef xlink:href="nexrad/level3/IDD/dataset.xml" xlink:title="NEXRAD Level III Radar from IDD" name="NEXRAD Level III Radar from IDD"/>
<catalogRef xlink:href="terminal/level3/IDD/dataset.xml" xlink:title="TDWR Level III Radar from IDD" name="TDWR Level III Radar from IDD"/>
</dataset>
</catalog>

On the currently deployed 5 snapshot machine, that same catalog returns 0 bytes.

Applying styles to WMS with SLD

There was a recent update to the TDS WMS configuration to support the SLD parameter (#73). I would like to use this feature, but it is not clear how.

Can someone (@ethanrd ?) help, or point to any resources, about using an SLD to define the style of a WMS layer on the TDS?

I assume it is not as simple as copying an SLD with the same name as the WMS/.nc file? 🤔
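For reference, the standard OGC WMS mechanism is to reference an external style document via the SLD query parameter (or to pass it inline via SLD_BODY). A hypothetical GetMap request of that shape (placeholder host, layer, and style URL) would be:

https://myserver/thredds/wms/datasets/myfile.nc?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&LAYERS=temperature&STYLES=&CRS=EPSG:4326&BBOX=-90,-180,90,180&WIDTH=512&HEIGHT=256&FORMAT=image/png&SLD=https://myserver/styles/mystyle.xml

Whether the TDS/ncWMS integration accepts that parameter out of the box, and how the SLD file should be published, is exactly what this issue is asking.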

HTTP Access of CDMS3 objects?

  • unidata/thredds-docker:5.3
  • I have some objects that I want to make available in whole via HTTP, but I get a 404 when trying a link that should otherwise work.

ucar.util.prefs removed from cdm

With Unidata/netcdf-java#34, ucar.util.prefs was removed from cdm, as it now lives in uibase. However, the TDS uses PreferencesExt and XMLStore to store config catalog preferences (trackerNumber, nextCatId, numberCatalogs). I am digging around to see where these values actually get used.
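For reference, the usage pattern in question looks roughly like this (a sketch of the ucar.util.prefs API; the file name is a placeholder):

import java.io.IOException;
import ucar.util.prefs.PreferencesExt;
import ucar.util.prefs.XMLStore;

public class PrefsSketch {
  public static void main(String[] args) throws IOException {
    // Open (or create) the XML-backed preferences store.
    XMLStore store = XMLStore.createFromFile("catalogPrefs.xml", null);
    PreferencesExt prefs = store.getPreferences();
    // Read and bump one of the values the TDS tracks.
    int trackerNumber = prefs.getInt("trackerNumber", 1);
    prefs.putInt("trackerNumber", trackerNumber + 1);
    store.save(); // persist the change back to the XML file
  }
}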

Problems with NCSS and WMS in v5

Hello, I was doing some tests with the newly released v5 and I think I found some problems with NCSS and WMS (the embedded ncWMS).

Environment:
OS: Linux 5.4.0-88-generic x86_64 x86_64 x86_64 GNU/Linux
JDK: jdk-11.0.12_linux-x64
Tomcat: apache-tomcat-8.5.72
TDS: v5.0

How to reproduce the problem:

I installed the environment following the docs and added a simple file (cmems_forecast_20210101.zip) with 3 time steps and 3 vertical levels (from CMEMS) to catalog.xml with:

    <datasetRoot path="datasets" location="/usr/local/tds/datasets" />

    <dataset name="cmems" ID="cmems"
             serviceName="all"  urlPath="datasets/cmems/cmems_forecast_20210101.nc" dataType="Grid"/>

The new dataset was recognized just fine by OPeNDAP, HTTPServer, and some other services, but there were some problems with the NCSS and WMS services.

NCSS

The geographical region shown in the map selection is not the same as the region in the grid. I tried some other files and the region extent never changed.
[screenshot: region]

When using the "NetCDF Subset Service for Grids As Points":

  • querying only the 3D (time, lat, lon) variable "Sea surface height", the result was OK;
  • querying a 3D and a 4D variable together, only the first level of the 4D variable was returned, whereas previous (v4.?) versions of THREDDS returned all vertical levels of the 4D variable in this case (a sample request of this shape is sketched after this list);
  • querying only a 4D variable failed with the error IllegalArgumentException: Index out of range=3;
  • querying a 2D variable (I did this with the crossSeamProjection.nc example file that comes with TDS) failed because the form required a time value ("Please fill out this field"), despite the file having no time dimension.
    [screenshot: var2d]
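For reference, a grid-as-point NCSS request of the shape described above would look like the following (placeholder host, coordinates, and variable names; the query parameters are the documented NCSS ones, and the dataset path matches the catalog entry above):

https://myserver/thredds/ncss/grid/datasets/cmems/cmems_forecast_20210101.nc?var=zos&var=uo&latitude=43.0&longitude=-9.0&time_start=2021-01-01T00:00:00Z&time_end=2021-01-03T00:00:00Z&accept=csv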

WMS (solved, see post below)

Sometimes the WMS fails with this error:
[screenshot: wms2]

while at other times it seems to give a valid answer, returning the Capabilities document:
[screenshot: wms]

but even in this case the ncWMS/Godiva3 viewer fails to load the data:
[screenshot: godiva]
Note: I changed the Base Layer from "NaturalEarth WMS" to "NASA Blue Marble WMS" because the first seems to have been offline for the last few days.

I also tried to use the GetCapabilities response in QGIS, but it failed to be recognized there as well:
[screenshot: qgis]

Thank you.

Feature Collections and missing jars

THREDDS-5.4-SNAPSHOT from 05-13-2022
CentOS 7, AdoptOpenJDK 11.0.15+10, Tomcat 8.5.75

I am attempting to upgrade an extensive collection of catalogs from THREDDS 4.6.20. The first time I started THREDDS 5.4, catalina.out complained that the following .jar files were missing:

  1. stax-api-1.0.1.jar
  2. chronicle-analytics-2.21ea0.jar
  3. chronicle-core-2.21ea25.jar

I looked in webapps/thredds##5.4-SNAPSHOT/WEB-INF/lib and indeed those files are missing. It didn't stop THREDDS from loading, but I noticed that none of my FMRC datasets were loading correctly. I checked fmrc.log and found several errors like the following:

[2022-05-26T16:22:36.103-0400] INFO  thredds.featurecollection.InvDatasetFeatureCollection: FeatureCollection added = FeatureCollectionConfig name ='Averages' collectionName='doppio_2017_da_avg' type='FMRC'
  spec='/home/om/dods-data/thredds/roms/doppio/2017_da/avg/doppio_avg_#yyyyMMdd_HHmm#.*\.nc$'
  timePartition =directory
  updateConfig =UpdateConfig{userDefined=true, recheckAfter='null', rescan='null', triggerOk=true, updateType=test}
  tdmConfig =UpdateConfig{userDefined=false, recheckAfter='null', rescan='null', triggerOk=true, updateType=test}
  ProtoConfig{choice=Latest, change='null', param='null', outerNcml='[Element: <netcdf [Namespace: http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2]/>]', cacheAll=true}
  hasInnerNcml =false
  fmrcConfig =FmrcConfig: regularize=false datasetTypes=[TwoD, Files, Runs, ConstantOffsets]best = (Best, 25.000000)

[2022-05-26T16:24:37.608-0400] ERROR ucar.nc2.ft.fmrc.Fmrc: makeFmrcInv
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: ChronicleMap{name=GridDatasetInv, file=/opt/tds/servers/thredds_test/content/thredds/cache/collection/GridDatasetInv.da
t, identityHashCode=667994454}: Attempt to allocate #3 extra segment tier, 2 is maximum.
Possible reasons include:
 - you have forgotten to configure (or configured wrong) builder.entries() number
 - same regarding other sizing Chronicle Hash configurations, most likely maxBloatFactor(), averageKeySize(), or averageValueSize()
 - keys, inserted into the ChronicleHash, are distributed suspiciously bad. This might be a DOS attack
        at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2051) ~[guava-30.1-jre.jar:?]
        at com.google.common.cache.LocalCache.get(LocalCache.java:3951) ~[guava-30.1-jre.jar:?]
        at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4848) ~[guava-30.1-jre.jar:?]
        at ucar.nc2.ft.fmrc.GridDatasetInv.open(GridDatasetInv.java:62) ~[cdm-core-5.5.3-SNAPSHOT.jar:5.5.3-SNAPSHOT]
        at ucar.nc2.ft.fmrc.Fmrc.makeFmrcInv(Fmrc.java:287) [cdm-core-5.5.3-SNAPSHOT.jar:5.5.3-SNAPSHOT]
        at ucar.nc2.ft.fmrc.Fmrc.update(Fmrc.java:222) [cdm-core-5.5.3-SNAPSHOT.jar:5.5.3-SNAPSHOT]
        at thredds.featurecollection.InvDatasetFcFmrc.updateCollection(InvDatasetFcFmrc.java:140) [classes/:5.4-SNAPSHOT]
        at thredds.featurecollection.InvDatasetFeatureCollection.checkState(InvDatasetFeatureCollection.java:280) [classes/:5.4-SNAPSHOT]
        at thredds.featurecollection.InvDatasetFcFmrc.makeCatalog(InvDatasetFcFmrc.java:175) [classes/:5.4-SNAPSHOT]
        at thredds.core.CatalogManager.makeDynamicCatalog(CatalogManager.java:118) [classes/:5.4-SNAPSHOT]
        at thredds.core.CatalogManager.getCatalog(CatalogManager.java:75) [classes/:5.4-SNAPSHOT]
        at thredds.server.catalogservice.CatalogServiceController.handleRequest(CatalogServiceController.java:55) [classes/:5.4-SNAPSHOT]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
        at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) [spring-web-5.3.19.jar:5.3.19]
        at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) [spring-web-5.3.19.jar:5.3.19]
        at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) [spring-webmvc-5.3.19.jar:5.3.19]
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) [spring-webmvc-5.3.19.jar:5.3.19]
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) [spring-webmvc-5.3.19.jar:5.3.19]
        at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) [spring-webmvc-5.3.19.jar:5.3.19]
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) [spring-webmvc-5.3.19.jar:5.3.19]
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) [spring-webmvc-5.3.19.jar:5.3.19]
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) [spring-webmvc-5.3.19.jar:5.3.19]
        at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) [spring-webmvc-5.3.19.jar:5.3.19]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:655) [servlet-api.jar:?]
        at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) [spring-webmvc-5.3.19.jar:5.3.19]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:764) [servlet-api.jar:?]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [catalina.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.75]
        at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-websocket.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.75]
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:327) [spring-security-web-5.6.1.jar:5.6.1]
        at thredds.servlet.filter.RequestBracketingLogMessageFilter.doFilter(RequestBracketingLogMessageFilter.java:50) [classes/:5.4-SNAPSHOT]
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336) [spring-security-web-5.6.1.jar:5.6.1]
        at thredds.servlet.filter.RequestQueryFilter.doFilter(RequestQueryFilter.java:90) [classes/:5.4-SNAPSHOT]
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336) [spring-security-web-5.6.1.jar:5.6.1]
        at thredds.servlet.filter.HttpHeadFilter.doFilter(HttpHeadFilter.java:47) [classes/:5.4-SNAPSHOT]
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336) [spring-security-web-5.6.1.jar:5.6.1]
        at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:211) [spring-security-web-5.6.1.jar:5.6.1]
        at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:183) [spring-security-web-5.6.1.jar:5.6.1]
        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:354) [spring-web-5.3.19.jar:5.3.19]
        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:267) [spring-web-5.3.19.jar:5.3.19]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.75]
        at org.apache.logging.log4j.web.Log4jServletFilter.doFilter(Log4jServletFilter.java:71) [log4j-web-2.17.1.jar:2.17.1]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.75]
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:196) [catalina.jar:8.5.75]
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) [catalina.jar:8.5.75]
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542) [catalina.jar:8.5.75]
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) [catalina.jar:8.5.75]
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [catalina.jar:8.5.75]
        at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:698) [catalina.jar:8.5.75]
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) [catalina.jar:8.5.75]
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:366) [catalina.jar:8.5.75]
        at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:639) [tomcat-coyote.jar:8.5.75]
        at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat-coyote.jar:8.5.75]
        at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:847) [tomcat-coyote.jar:8.5.75]
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1680) [tomcat-coyote.jar:8.5.75]
        at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-coyote.jar:8.5.75]
        at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) [tomcat-util.jar:8.5.75]
        at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) [tomcat-util.jar:8.5.75]
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-util.jar:8.5.75]
        at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.lang.IllegalStateException: ChronicleMap{name=GridDatasetInv, file=/opt/tds/servers/thredds_test/content/thredds/cache/collection/GridDatasetInv.dat, identityHashCode=667994454}: Attempt to allocate
#3 extra segment tier, 2 is maximum.
Possible reasons include:
 - you have forgotten to configure (or configured wrong) builder.entries() number
 - same regarding other sizing Chronicle Hash configurations, most likely maxBloatFactor(), averageKeySize(), or averageValueSize()
 - keys, inserted into the ChronicleHash, are distributed suspiciously bad. This might be a DOS attack
        at net.openhft.chronicle.hash.impl.VanillaChronicleHash.allocateTier(VanillaChronicleHash.java:822) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.nextTier(CompiledMapQueryContext.java:3131) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.alloc(CompiledMapQueryContext.java:3492) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.initEntryAndKey(CompiledMapQueryContext.java:3510) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.putEntry(CompiledMapQueryContext.java:4003) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.doInsert(CompiledMapQueryContext.java:4192) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.MapEntryOperations.insert(MapEntryOperations.java:153) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.insert(CompiledMapQueryContext.java:4115) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.MapMethods.put(MapMethods.java:88) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at net.openhft.chronicle.map.VanillaChronicleMap.put(VanillaChronicleMap.java:856) ~[chronicle-map-3.21ea6.jar:3.21ea6]
        at thredds.featurecollection.cache.GridInventoryCacheChronicle.put(GridInventoryCacheChronicle.java:89) ~[classes/:5.4-SNAPSHOT]
        at ucar.nc2.ft.fmrc.GridDatasetInv$GenerateInv.call(GridDatasetInv.java:112) ~[cdm-core-5.5.3-SNAPSHOT.jar:5.5.3-SNAPSHOT]
        at ucar.nc2.ft.fmrc.GridDatasetInv$GenerateInv.call(GridDatasetInv.java:68) ~[cdm-core-5.5.3-SNAPSHOT.jar:5.5.3-SNAPSHOT]
        at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4853) ~[guava-30.1-jre.jar:?]
        at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529) ~[guava-30.1-jre.jar:?]
        at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278) ~[guava-30.1-jre.jar:?]
        at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155) ~[guava-30.1-jre.jar:?]
        at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045) ~[guava-30.1-jre.jar:?]
        ... 66 more

I tried symlinking the chronicle-core that is included with THREDDS (ln -s chronicle-core-2.21ea28.jar chronicle-core-2.21ea25.jar) and downloading the other two missing jars to webapps/thredds##5.4-SNAPSHOT/WEB-INF/lib. I restarted the Tomcat instance and catalina.out no longer issued the warnings, but it made no difference for my FMRC datasets; the same errors appear in fmrc.log.

I then stripped out all other datasets and made my entire catalog a single FMRC with 9920 files. This failed in exactly the same way. However, when I started with a smaller number of files and then added month by month, sending an update trigger between each addition, the FMRC dataset was created successfully. I was able to build the dataset by adding about one year (~2190 files) at a time with a trigger between each add.

However, when I tried to add two years (~4380 files) and then send a trigger, the error above reappeared. Once the error occurs, the trigger no longer has any effect and the TDS will not try to rebuild the dataset, so I can continue accessing the dataset without the two years of files I tried to add. Once I restart the Tomcat instance, however, accessing the dataset throws the same errors in fmrc.log and also shows errors in the web interface.

For this dataset I am activating TwoD, Runs, ConstantOffsets, Files, and Best. When I click the OPeNDAP link for the Best or Forecast Model Run Collection (2D time coordinates) dataset, I get the following error:

Error {
    code = 500;
    message = "null";
};

Clicking Forecast Model Run results in:

FileNotFound: test01/runs/catalog.xml

Clicking Constant Forecast Offset results in:

FileNotFound: test01/offset/catalog.xml

The Files dataset is the only one that works.

The catalog is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<catalog name="DMCS Catalog"
         xmlns="http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0"
         xmlns:xlink="http://www.w3.org/1999/xlink">

   <service name="all" serviceType="Compound" base="">
      <service name="thisDODS" serviceType="OpenDAP" base="/thredds/dodsC/" />
      <service name="http" serviceType="HTTPServer" base="/thredds/fileServer/" />
      <service name="wms" serviceType="WMS" base="/thredds/wms/" />
      <service name="ncss" serviceType="NetcdfSubset" base="/thredds/ncss/grid/" />
   </service>

   <dataset name="ROMS doppio Real-Time Operational PSAS Forecast System Version 1 2017-present">
      <featureCollection name="Averages"
                         featureType="FMRC"
                         harvest="true"
                         path="test01">
         <metadata inherited="true">
            <serviceName>all</serviceName>
            <dataFormat>netCDF</dataFormat>
            <documentation type="summary">
               FMRC datasets for ROMS doppio real-time operational PSAS forecast system version 1 averages
            </documentation>
         </metadata>
         <collection spec="/path/to/data/tst/doppio_avg_#yyyyMMdd_HHmm#.*\.nc$"
                     name="doppio_2017_da_avg" />
         <update startup="test" trigger="allow" />
         <protoDataset choice="Latest">
            <netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2" location="nc/cldc.mean.nc" />
         </protoDataset>
         <fmrcConfig regularize="false" datasetTypes="TwoD Runs ConstantOffsets Files" >
            <dataset name="Best" offsetsGreaterEqual="25" />
         </fmrcConfig>
      </featureCollection>

   </dataset>

</catalog>

Wrong URL under OPeNDAP page

Hello,

I'm running THREDDS 5.0.0-beta5 in a Docker container, using apache2 as a reverse proxy. You can find the service at this address.
As you can see on the OPeNDAP page for one of the files, the Data URL seems to be the one internal to the Docker container. How can I fix this?
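A common cause is that Tomcat, running behind the proxy, builds URLs from its own connector settings rather than the public hostname. One usual remedy (a sketch with placeholder host and port) is to set the proxy attributes on the connector in Tomcat's server.xml, together with ProxyPreserveHost On on the Apache side:

<!-- server.xml sketch (placeholder values): tell Tomcat the public
     hostname and port the reverse proxy presents, so generated URLs use them. -->
<Connector port="8080" protocol="HTTP/1.1"
           proxyName="data.example.org" proxyPort="443" scheme="https" />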

Thanks!

NCSS fails on some datasets

THREDDS-5.4-SNAPSHOT from 05-13-2022
CentOS 7, AdoptOpenJDK 11.0.15+10, Tomcat 8.5.75

I have the same dataset being served by both the 5.4 snapshot from May 13th and version 4.6.20 on the same physical machine, using different Tomcat instances (and different Java versions; 4.6.20 uses jdk8u322-b06). NetcdfSubset on 5.4 works fine for other datasets in my catalogs, so I don't think it's a configuration issue. The NetcdfSubset link for a particular dataset (thredds/ncss/grid/roms/doppio/DopAnV2R3-ini2007_da/mon_ens_means/dataset.html on both servers) works fine in 4.6.20 but throws the following error in the browser with 5.4:

FileNotFound: Not a Grid Dataset roms/doppio/DopAnV2R3-ini2007_da/mon_ens_means err=/home/om/dods-data/thredds/roms/doppio/DopAnV2R3-ini2007_da/mon_ens_means (No such file or directory)

and the following in threddsServlet.log:

2022-06-16T14:52:09.038 -0400 [1137939686][     597] INFO  - threddsServlet - Remote host: 192.168.10.15 - Request: "GET /thredds/ncss/grid/roms/doppio/DopAnV2R3-ini2007_da/mon_ens_means/dataset.html HTTP/1.1"
2022-06-16T14:52:09.088 -0400 [1137939736][     597] WARN  - thredds.server.TdsErrorHandling - TDS Error
java.io.FileNotFoundException: Not a Grid Dataset roms/doppio/DopAnV2R3-ini2007_da/mon_ens_means err=/home/om/dods-data/thredds/roms/doppio/DopAnV2R3-ini2007_da/mon_ens_means (No such file or directory)
        at thredds.core.DatasetManager.openCoverageDataset(DatasetManager.java:416) ~[classes/:5.4-SNAPSHOT]
        at thredds.core.TdsRequestedDataset.openAsCoverageDataset(TdsRequestedDataset.java:138) ~[classes/:5.4-SNAPSHOT]
        at thredds.core.TdsRequestedDataset.getCoverageCollection(TdsRequestedDataset.java:72) ~[classes/:5.4-SNAPSHOT]
        at thredds.server.ncss.controller.NcssGridController.getDatasetDescriptionHtml(NcssGridController.java:232) ~[classes/:5.4-SNAPSHOT]
        at thredds.server.ncss.controller.NcssGridController.getGridDatasetDescriptionHtml(NcssGridController.java:219) ~[classes/:5.4-SNAPSHOT]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
        at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.20.jar:5.3.20]
        at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.20.jar:5.3.20]
        at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.20.jar:5.3.20]
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.20.jar:5.3.20]
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.20.jar:5.3.20]
        at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.20.jar:5.3.20]
        at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) [spring-webmvc-5.3.20.jar:5.3.20]
        at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) [spring-webmvc-5.3.20.jar:5.3.20]
        at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) [spring-webmvc-5.3.20.jar:5.3.20]
        at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) [spring-webmvc-5.3.20.jar:5.3.20]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:655) [servlet-api.jar:?]
        at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) [spring-webmvc-5.3.20.jar:5.3.20]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:764) [servlet-api.jar:?]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [catalina.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.75]
        at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat-websocket.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.75]
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:327) [spring-security-web-5.6.5.jar:5.6.5]
        at thredds.servlet.filter.RequestBracketingLogMessageFilter.doFilter(RequestBracketingLogMessageFilter.java:50) [classes/:5.4-SNAPSHOT]
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336) [spring-security-web-5.6.5.jar:5.6.5]
        at thredds.servlet.filter.RequestQueryFilter.doFilter(RequestQueryFilter.java:90) [classes/:5.4-SNAPSHOT]
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336) [spring-security-web-5.6.5.jar:5.6.5]
        at thredds.servlet.filter.HttpHeadFilter.doFilter(HttpHeadFilter.java:47) [classes/:5.4-SNAPSHOT]
        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:336) [spring-security-web-5.6.5.jar:5.6.5]
        at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:211) [spring-security-web-5.6.5.jar:5.6.5]
        at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:183) [spring-security-web-5.6.5.jar:5.6.5]
        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:354) [spring-web-5.3.20.jar:5.3.20]
        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:267) [spring-web-5.3.20.jar:5.3.20]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.75]
        at org.apache.logging.log4j.web.Log4jServletFilter.doFilter(Log4jServletFilter.java:71) [log4j-web-2.17.1.jar:2.17.1]
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [catalina.jar:8.5.75]
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [catalina.jar:8.5.75]
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:196) [catalina.jar:8.5.75]
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) [catalina.jar:8.5.75]
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542) [catalina.jar:8.5.75]
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) [catalina.jar:8.5.75]
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [catalina.jar:8.5.75]
        at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:698) [catalina.jar:8.5.75]
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) [catalina.jar:8.5.75]
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:366) [catalina.jar:8.5.75]
        at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:639) [tomcat-coyote.jar:8.5.75]
        at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat-coyote.jar:8.5.75]
        at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:847) [tomcat-coyote.jar:8.5.75]
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1680) [tomcat-coyote.jar:8.5.75]
        at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-coyote.jar:8.5.75]
        at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) [tomcat-util.jar:8.5.75]
        at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) [tomcat-util.jar:8.5.75]
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-util.jar:8.5.75]
        at java.lang.Thread.run(Thread.java:829) [?:?]

The datasets that work correctly on 5.4 have more metadata and grid information. My question is: what additional information does a dataset need for NetcdfSubset to work in THREDDS 5.4, as compared to 4.6.20?
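One way to check locally what the CDM makes of such a file (a sketch using the public netCDF-Java factory API; the path is a placeholder):

import java.util.Formatter;
import ucar.nc2.constants.FeatureType;
import ucar.nc2.ft.FeatureDataset;
import ucar.nc2.ft.FeatureDatasetFactoryManager;

public class GridCheck {
  public static void main(String[] args) throws Exception {
    Formatter errlog = new Formatter();
    // Point this at one file from the failing dataset.
    try (FeatureDataset fd = FeatureDatasetFactoryManager.open(
        FeatureType.GRID, "/path/to/sample.nc", null, errlog)) {
      if (fd == null) {
        System.out.println("Not recognized as a grid: " + errlog);
      } else {
        System.out.println("Opened as: " + fd.getFeatureType());
      }
    }
  }
}

Comparing the errlog output between the cdm versions used by 4.6.20 and 5.4 should show which coordinate metadata the newer version is missing.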

S3 Aggregation Best Practice?

I'm testing with:
unidata/thredds-docker:5.3

I have a "content root":
TDS_CONTENT_ROOT_PATH=/mnt/content

And have mounted TDS configuration files there. Otherwise, I'm working in a stock TDS.

I have a /mnt/thredds that I use to store configuration files. I think of this as the "data root".

Side note:
In our non-AWS production system, /mnt/thredds also houses all our netCDF data content, some of which consists of joinExisting aggregations of many thousands of files; some of these also union a collection of joinExisting aggregations so that we have many variables in one OPeNDAP endpoint, e.g. https://cida.usgs.gov/thredds/dodsC/loca_future.html

In my testing for migration to S3, I am trying to work out what the best configuration pattern will be to get performance out of these data.

I've determined that a catalog/ncml pattern like either of the following will function and result in a joinExisting cache that contains the indices of the joinExisting dimension:

  <dataset name="test prism agg regexp (works but SLOW?)" ID="prism-agg-3" urlPath="prism-agg/agg3" dataType="Grid" serviceName="all">
    <ncml:netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2" id="prism-agg-3">
      <aggregation dimName="time" type="joinExisting" timeUnitsChange="true">
        <scan location="cdms3://default@aws/nhgf-development?thredds/prism_v2/#delimiter=/" regExp=".*nc"/>
      </aggregation>
    </ncml:netcdf>
  </dataset>

or

  <dataset name="test prism agg explicit" ID="prism-agg-2" urlPath="prism-agg/agg2" dataType="Grid" serviceName="all">
    <ncml:netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2" id="prism-agg-2">
      <aggregation dimName="time" type="joinExisting" timeUnitsChange="true">
        <netcdf location="cdms3://default@aws/nhgf-development?thredds/prism_v2/prism_2019.nc"/>
        <netcdf location="cdms3://default@aws/nhgf-development?thredds/prism_v2/prism_2020.nc"/>
      </aggregation>
    </ncml:netcdf>
  </dataset>

Now to the crux of my issue. I'm getting very slow responses when querying these aggregations for coordinate variables which really need to be snappy.

In contrast to the stand-alone catalog dataset elements above, I currently have my catalog and NcML laid out in three tiers that help separate concerns:

  1. a THREDDS catalog (XML) containing a netcdf element with a location attribute pointing to an NcML file;
  2. the NcML file contains dataset metadata and a netcdf element with a location attribute pointing to
  3. an NcML aggregation file with the contents of the XML snippets above.

My thought is to modify my ncml in the third tier so that, rather than pointing to data adjacent to it in a traditional file system, it points to a cdms3 location either using a scan or using explicit object paths.

But then I still run up against this issue where my reads of coordinate variables are very slow. In this particular case it is because the time bounds variable is not cached and has to be retrieved by reading each file. In other cases, I would really like to avoid scanning all the data to populate a cache in the first place.
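For reference, the joinExisting cache being discussed lives where the AggregationCache element of threddsConfig.xml points (a sketch based on the stock configuration; the values are illustrative, with the dir matching the content root mentioned above):

<AggregationCache>
  <dir>/mnt/content/thredds/cache/aggregation/</dir>
  <scour>24 hours</scour>
  <maxAge>90 days</maxAge>
</AggregationCache>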

So I guess I have three questions.

  1. Is this three-tiered catalog metadata -> ncml metadata -> ncml aggregation layout going to cause us issues with caching or in other ways I'm not aware of?
  2. Is it possible to pre-cache coordinate variables in joinExisting or other ncml collections? Where would I go to find information about that?
  3. long shot, but I've never used the TDM -- should I pursue deploying that for its utility in managing the caches?

Thanks -- and please tell me to RTFM if one exists!

HTTP HEAD requests on thredds/fileServer seem to read entire file, only to report the file size.

To report a non-security related issue, please provide:

  • the version of the software with which you are encountering an issue and environmental information (i.e. Operating System, compiler info, java version, python version, etc.)
$ catalina.sh version
...
Server version: Apache Tomcat/8.5.39
Server built:   Mar 14 2019 11:24:26 UTC
Server number:  8.5.39.0
OS Name:        Linux
OS Version:     3.10.0-1062.1.1.el7.x86_64
Architecture:   amd64
JVM Version:    1.8.0_222-b10
JVM Vendor:     Oracle Corporation
...
$ uname -a
Linux [hostname] 3.10.0-1062.1.1.el7.x86_64 #1 SMP Tue Aug 13 18:39:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

$ cat thredds/META-INF/MANIFEST.MF
Manifest-Version: 1.0
Implementation-Title: THREDDS Data Server (TDS)
Implementation-Version: 5.0.0-beta5
Built-By: lesserwhirls
Implementation-Vendor-Id: edu.ucar
Implementation-URL: https://www.unidata.ucar.edu/software/thredds/curr
 ent/tds/TDS.html
Created-By: Gradle 3.5.1
Build-Jdk: 1.8.0_171
Built-On: 2018-09-10T18:14:26-0600
Implementation-Vendor: UCAR/Unidata
  • a description of the issue with the steps needed to reproduce it

Performing an HTTP HEAD request on thredds/fileServer seems to read the entire file, just to report its size. Is this intentional? It can place significant load on the server.
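A quick way to observe this (a sketch using only JDK classes; the URL is a placeholder) is to time a HEAD request against a large file:

import java.net.HttpURLConnection;
import java.net.URL;

public class HeadTiming {
  public static void main(String[] args) throws Exception {
    // Point at a large file served by thredds/fileServer.
    URL url = new URL("http://localhost:8080/thredds/fileServer/path/big.nc");
    long start = System.nanoTime();
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("HEAD"); // the response should carry headers but no body
    System.out.println("status=" + conn.getResponseCode()
        + " Content-Length=" + conn.getHeaderField("Content-Length")
        + " elapsed_ms=" + (System.nanoTime() - start) / 1_000_000);
    conn.disconnect();
  }
}

If the elapsed time scales with the file size, the whole body is being generated server-side.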

Looking around a little, it looks like this may be the relevant code:

https://github.com/Unidata/tds/blob/v5.0.0-beta7/tds/src/main/java/thredds/servlet/filter/HttpHeadFilter.java

If you have a general question about the software, please view our Suggested Support Process.

NetcdfSubset err=Could not open as Coverage point/grid service not working

version: unidata/thredds-docker:5.0

Services working: OPeNDAP, DAP4, HTTP, ISO, NcML, CDMRemote.

Service not working: NetcdfSubset (both grid and point throw an error).

Sample file for testing: testfile.zip

A strange thing I noticed: with version unidata/thredds-docker:4.6.17, thredds/ncss/grid works as expected. Can someone shed light on this issue?

$  ferret
 	NOAA/PMEL TMAP
 	PyFerret v7.63 (optimized)
 	Linux 4.15.0-1096-azure - 10/13/20
 	16-Oct-21 22:35     

yes? use testfile.nc
yes? sh d
     currently SET data sets:
    1> ./testfile.nc  (default)
 name     title                             I         J         K         L
 U        U_1205                           1:1       1:1       1:10      1:20
 V        V_1206                           1:1       1:1       1:10      1:20
 
yes? sh g u
    GRID GNL1
 name       axis              # pts   start                end                 subset
 LON       LONGITUDE            1 r   72.744E(72.744)      72.744E(72.744)     full
 LAT       LATITUDE             1 r   15.166N              15.166N             full
 DEPTH1_10 DEPTH (m)           10 r-  -101.42              -65.421             full
 TIME      TIME                20 r   03-OCT-2017 16:00    04-OCT-2017 11:00   full


$ ncdump -h testfile.nc 
netcdf testfile {
dimensions:
	LON = 1 ;
	LAT = 1 ;
	DEPTH1_10 = 10 ;
	TIME = UNLIMITED ; // (20 currently)
variables:
	float LON(LON) ;
		LON:long_name = "Longitude" ;
		LON:units = "degrees_east" ;
		LON:point_spacing = "even" ;
		LON:axis = "X" ;
		LON:standard_name = "longitude" ;
	float LAT(LAT) ;
		LAT:long_name = "Latitude" ;
		LAT:units = "degrees_north" ;
		LAT:point_spacing = "even" ;
		LAT:axis = "Y" ;
		LAT:standard_name = "latitude" ;
	float DEPTH1_10(DEPTH1_10) ;
		DEPTH1_10:long_name = "Depth (m)" ;
		DEPTH1_10:units = "meters" ;
		DEPTH1_10:positive = "down" ;
		DEPTH1_10:point_spacing = "even" ;
		DEPTH1_10:axis = "Z" ;
		DEPTH1_10:standard_name = "depth" ;
	float TIME(TIME) ;
		TIME:long_name = "Time" ;
		TIME:units = "hours since 2017-10-03 16:00:00" ;
		TIME:time_origin = "03-OCT-2017 16:00:00" ;
		TIME:axis = "T" ;
		TIME:standard_name = "time" ;
	double U(TIME, DEPTH1_10, LAT, LON) ;
		U:missing_value = -1.e+34 ;
		U:_FillValue = -1.e+34 ;
		U:long_name = "U_1205" ;
		U:history = "From test" ;
	double V(TIME, DEPTH1_10, LAT, LON) ;
		V:missing_value = -1.e+34 ;
		V:_FillValue = -1.e+34 ;
		V:long_name = "V_1206" ;
		V:history = "From test" ;

// global attributes:
		:history = "PyFerret V7.63 (optimized) 16-Oct-21" ;
		:Conventions = "CF-1.6" ;
}

Error Log

2021-10-16T17:07:39.658 +0000 [   1937647][     245] INFO  - threddsServlet - Request Completed - 302 - -1 - 2
2021-10-16T17:07:41.631 +0000 [   1939620][     246] INFO  - threddsServlet - Remote host: 172.31.0.1 - Request: "GET /thredds/ncss/grid/test/data/testfile.nc/dataset.html HTTP/1.1"
2021-10-16T17:07:41.708 +0000 [   1939697][     246] WARN  - thredds.server.TdsErrorHandling - TDS Error
java.io.FileNotFoundException: Not a Grid Dataset test/data/testfile.nc err=Could not open as Coverage: /test/testfile.nc
	at thredds.core.DatasetManager.openCoverageDataset(DatasetManager.java:416) ~[classes/:5.0]
	at thredds.core.TdsRequestedDataset.openAsCoverageDataset(TdsRequestedDataset.java:138) ~[classes/:5.0]
	at thredds.core.TdsRequestedDataset.getCoverageCollection(TdsRequestedDataset.java:72) ~[classes/:5.0]
	at thredds.server.ncss.controller.NcssGridController.getDatasetDescriptionHtml(NcssGridController.java:232) ~[classes/:5.0]
	at thredds.server.ncss.controller.NcssGridController.getGridDatasetDescriptionHtml(NcssGridController.java:219) ~[classes/:5.0]
	at jdk.internal.reflect.GeneratedMethodAccessor92.invoke(Unknown Source) ~[?:?]
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
	at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:197) ~[spring-web-5.3.7.jar:5.3.7]
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:141) ~[spring-web-5.3.7.jar:5.3.7]
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeA
