%META:TOPICINFO{author="MichaelBrowne" date="1142989345" format="1.0" version="1.3"}%
%META:TOPICPARENT{name="AgadirMeeting"}%

Report for Checklist Working Group
The Checklist Working Group discussed the elements to be included in the schema. During the break-out sessions, discussion concentrated on reported data. Fact sheets were therefore not discussed, but the group suggests that GISIN keep a way to include them in the future.

The Checklist Working Group identified important concepts that consistently appear in checklists and should be supported by the schema.
1. Species name

Species name (genus, species, “infraspecific categories”, author and date) is required as a string; more discussion is needed on whether it should be a single string or separate elements. A synonym element should be included in the schema.
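As an illustration only, here is a minimal Python sketch of the two candidate representations (single string versus separated elements), with a synonym list attached; the field names are assumptions, not agreed schema terms.

<verbatim>
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical illustration of the "single string vs. separated elements"
# question; field names are assumptions, not GISIN schema terms.

@dataclass
class SpeciesName:
    # Option A: the full name as one verbatim string.
    verbatim: str                          # e.g. "Lantana camara L."
    # Option B: the same information as separated elements.
    genus: Optional[str] = None
    species: Optional[str] = None
    infraspecific: Optional[str] = None    # infraspecific category/epithet
    author_and_date: Optional[str] = None
    # Synonyms, as recommended by the working group.
    synonyms: List[str] = field(default_factory=list)

record = SpeciesName(
    verbatim="Lantana camara L.",
    genus="Lantana",
    species="camara",
    author_and_date="L.",
    synonyms=["Lantana aculeata L."],
)
</verbatim>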
2. Location

See location elements in the current schema.

Recommendation: decimal degrees are preferred for geographical coordinates. This could be an annotation to the geographic coordinates field.
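As an aside, the arithmetic behind the decimal-degrees recommendation is simple; the helper below is purely illustrative and not part of any GISIN tooling.

<verbatim>
def dms_to_decimal(degrees: int, minutes: int, seconds: float, hemisphere: str) -> float:
    """Convert degrees/minutes/seconds to decimal degrees.

    Illustrative only: shows the arithmetic behind the "decimal degrees
    preferred" recommendation. South and west hemispheres become negative.
    """
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere.upper() in ("S", "W") else value

# Example: 30°25'12" N, 9°36'00" W (roughly Agadir)
print(dms_to_decimal(30, 25, 12, "N"))   # 30.42
print(dms_to_decimal(9, 36, 0, "W"))     # -9.6
</verbatim>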
3. Biostatus

See biostatus elements in the current schema and recommendations from the TerminologyWorkingGroup.

4. Impacts

See impact elements in the current schema and recommendations from the TerminologyWorkingGroup.
5. Population dynamics data

Population dynamics elements in the current schema may be revised, in consultation with the TerminologyWorkingGroup, to include the following elements (an illustrative sketch follows the list):

 spatial extent;
 spatial density;
 means of spread;
 rate of spread;
 population rate of change.

Contact Rob Emery for additional elements.
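Purely as an illustration, the candidate elements above could be gathered into a single record along these lines; the names, types and units are assumptions, not terms agreed with the TerminologyWorkingGroup.

<verbatim>
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the candidate population-dynamics elements listed above.
@dataclass
class PopulationDynamics:
    spatial_extent: Optional[str] = None             # e.g. area occupied, km^2
    spatial_density: Optional[str] = None            # e.g. individuals per km^2
    means_of_spread: Optional[str] = None            # e.g. "waterways", "vehicles"
    rate_of_spread: Optional[str] = None             # e.g. km per year
    population_rate_of_change: Optional[str] = None  # e.g. percent per year
</verbatim>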
6. Introduction and/or observation dates

Introduction and/or observation dates in the current schema may be revised to include the following elements:

 dates of successive introductions;
 date of first observation.
7. Pathways and dispersal

8. Use

See the use element in the current schema.

9. Management

See management elements in the current schema.

10. Hosts

See hosts elements in the current schema.
11. Referencing/documenting information sources

Use objective criteria (e.g. peer-reviewed literature, expert interviews, etc.) for the reliability/validity statement, and record the date of record creation or last update.
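As an illustration only, a record documenting an information source might carry an objective source-type criterion and an update date along these lines; the vocabulary shown is an assumption, not an agreed GISIN controlled list.

<verbatim>
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Assumed (not agreed) vocabulary of objective reliability criteria.
class SourceType(Enum):
    PEER_REVIEWED_LITERATURE = "peer-reviewed literature"
    EXPERT_INTERVIEW = "expert interview"
    GREY_LITERATURE = "grey literature"
    UNVERIFIED_REPORT = "unverified report"

@dataclass
class SourceDocumentation:
    citation: str
    source_type: SourceType   # objective reliability/validity criterion
    last_updated: date        # date of record creation or last update
</verbatim>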
_Issues that the break-out group did not have time to discuss include:_

12. Should there be a description container (e.g. measurements)?

13. Habitat
-- Main.AnnieSimpson - 14 Mar 2006

--

Discussion about robust TimeStamping of IAS data:

MichaelBrowne
In order to sell GISIN to donors, we should emphasise the role GISIN can play in providing the data that will underpin the IAS indicators being developed to monitor progress on global biodiversity targets. We should pay particular attention to 'time stamping' of data in the schema, so that a snapshot of, for example, the spatial extent of all known aquatic IAS in 2006 can be compared to a snapshot of their spatial extent in 2010. Other data types potentially useful for IAS indicators include:

 money being spent on eradication/control;
 ecosystem response to eradication/control activities;
 pressure indicators, e.g. number of border intercepts/incursions, number of new organisms introduced per annum, number of IAS by country, number of countries with active control programmes.

Good 'time stamping' of IAS data is also vital to modelling.
AnnieSimpson

Your comment about the ability to consult the GISIN with time taken into account is very important. Along with how invasive species arrive (pathways), when they arrive (a time stamp) will be vital to modeling and to measures of management success. Do you have suggestions on how to keep these parameters to the fore? E.g., is a date stamp a required element? (I think the answer is no, but we would then need to create filters to mask data without timestamps.)
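As an illustration of the kind of filter mentioned above, here is a minimal sketch that masks records lacking a usable timestamp; the record shape is an assumption, not the GISIN schema.

<verbatim>
from typing import Iterable, List, Optional, TypedDict

# Assumed record shape, for illustration only.
class OccurrenceRecord(TypedDict):
    species: str
    location: str
    event_date: Optional[str]   # ISO 8601 date, or None if unknown

def with_timestamps(records: Iterable[OccurrenceRecord]) -> List[OccurrenceRecord]:
    """Keep only records that carry a date, e.g. for time-series indicators."""
    return [r for r in records if r.get("event_date")]
</verbatim>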
JerryCooper

I think you need to consider carefully what exactly you are time-stamping. Do you want to use the data to create a time sequence for the acquisition of the information about the distribution of organisms, or the time data embedded within that information? I suggest the community wants the latter and not the former. That is: is it the 'date stamp' associated with the original collection/observation of the primary data record (already done via DC/ABCD), is it the date of publication of a compiled list (already done via the literature reference data), or is it the date that the information was acquired via a GISIN indexer? Only the first is really useful for any meaningful time-series analysis of spread, the second is marginally useful, and the last is simply bookkeeping.
MichaelBrowne

Yes, we must consider carefully what we are time-stamping, and yes, it is the time data embedded within the information that we are interested in for time-series analysis. As you say (for an IAS indicator of spatial extent), embedded time data can potentially be the following (a sketch follows the list):

 date of observation (DC/ABCD);
 date of first observation (sometimes stated explicitly in the IAS community, but derived in DC/ABCD data);
 date of subsequent observations (DC/ABCD);
 date of introduction (often stated explicitly in the IAS community);
 date of publication of a literature reference (e.g. date of publication of a compiled list).

…there may be other dates of interest?
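Purely as an illustration, these embedded date types could be captured as a small controlled list, kept separate from bookkeeping dates such as 'date record acquired by an indexer'; the names are assumptions, not schema terms.

<verbatim>
from enum import Enum

# Assumed names, for illustration only; not GISIN schema terms.
class EmbeddedDateType(Enum):
    OBSERVATION = "date of observation"                   # DC/ABCD
    FIRST_OBSERVATION = "date of first observation"       # derived in DC/ABCD
    SUBSEQUENT_OBSERVATION = "date of subsequent observation"
    INTRODUCTION = "date of introduction"                 # stated in IAS sources
    LITERATURE_PUBLICATION = "date of publication of literature reference"
</verbatim>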
I guess the IAS indicators people could focus on seeking global DC/ABCD observation data for a few flagship IAS to establish their 2006 distributions, and then do it again in 2010, but any increase in spread revealed that way is likely to be partly due to the increased availability of data. It is also likely that the 2006 global distribution of an IAS according to the literature is far more comprehensive than its 2006 distribution according to DC/ABCD observation data. Even though the time data embedded within the literature is less robust, this kind of data is probably the best available for global assessments.
AnnieSimpson

We've lived through the dangers of creating an ambiguous date stamp in metadata cataloguing tools here at NBII.

I agree it is essential to distinguish what kind of date any record has, and that 'date record acquired' is practically useless.

But do we really need to distinguish between date of first observation (derived in DC/ABCD data) and date of introduction (not supported by existing standards)?

Michael, you ask for more date types of interest, but I feel that (in the interest of keeping the cataloguing effective and streamlined) we should not dig too deeply for more possibilities, but rather (if possible) map any new types to the ones you mention here.
Bob Morris

Obviously I am neutral, by reason of incompetence, on the question of _which_ elements have dates associated with them.
However, the question of "date record acquired" and related issues is not as simple as it might appear. There are several reasons for this, the main one being cache maintenance. In general, I urge that data providers always offer some form of "guaranteed good until" date (i.e. a promise not to update records before that date) so that applications (whether indexers or individual end-user applications) can have confidence that their current version of the record actually reflects what the data provider has. Most scientists tend to resist this on the principle that when a datum is known to need a change they want it changed immediately, in order to have the most up-to-date science available. I find that an odd view from people who are willing to tolerate months if not years for paper publications to appear, and who justify hoarding data so that the data gatherer can analyze it without being "scooped".
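As an illustration of the 'guaranteed good until' idea, a client or indexer could decide whether its cached copy is still authoritative along these lines; the field name and logic are assumptions, not an existing GISIN protocol.

<verbatim>
from datetime import date
from typing import Optional

def cache_is_authoritative(good_until: date, today: Optional[date] = None) -> bool:
    """True while the provider has promised not to update the record."""
    today = today or date.today()
    return today <= good_until

# Usage: serve from cache while the promise holds, otherwise re-fetch.
if cache_is_authoritative(date(2006, 12, 31)):
    print("cached copy is guaranteed current by the provider")
else:
    print("re-fetch from the original data provider before relying on it")
</verbatim>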
Worst of all, what the absence of a "good until" date really induces is that an application which cares whether it has up-to-date information is motivated to put more traffic against the original data server instead of against caches and indexers. Since service is likely to be faster from those secondary services, most applications will use them anyway. In turn, the consequence is that most applications will be ignorant of the fact that the currency of the data they hold is not, in fact, controlled by the policy of the original provider, but rather by the policy of the secondary, which may be capricious, useless, undocumented, irregular, ill-advised, or all of the above. And the client applications have no way to tell, nor any way to tell which record is best if two apparently similar records acquired from different secondaries should differ. (The later one is not automatically the better, in the case that yet a third, unknown revision is lurking at the original provider, which might even have rolled the record back to the earlier one!) IAS is an interesting special case, since EDRR [early detection and rapid response] puts a high value on new occurrence records. But I don't know if checklists are the appropriate place to address this.