Grant Details
Description
Component type: This Knowledge Provider project will continue and significantly extend work
done by the Translator Consortium Blue Team, focusing on deriving knowledge from real-world
data through complex analytic workflows, integrated into the Translator Knowledge Graph, and
served via tools like Big GIM and the Translator Standard API.
The problem: We aim to solve the “first mile” problem of translational research: how to
integrate the multitude of dynamic small-to-large data sets that have been produced by the
research and clinical communities, but that are in different locations, processed in different
ways, and in a variety of formats that may not be mutually interoperable. Integrating these data
sets requires significant manual work: downloading, reformatting, parsing, indexing, and
analyzing each data set in turn. The technical and ethical challenges of accessing diverse
collections of big data, efficiently selecting information relevant to different users’ interests, and
extracting the underlying knowledge are problems that remain unsolved. Here, we propose to
leverage lessons distilled from our previous and ongoing big data analysis projects to develop a
highly automated tool for removing these bottlenecks, enabling researchers to analyze and
integrate many valuable data sets with ease and efficiency, and making the data FAIR [1].
Plan: (AIM 1) We will analyze and extract knowledge from rich real-world biomedical data sets (listed on the Resources page) in the domains of wellness, cancer, and large-scale clinical records. (AIM 2) We will formalize methods from Aim 1 to develop DOCKET, a novel tool for onboarding and integrating data from multiple domains. (AIM 3) We will work with other teams to adapt DOCKET to additional knowledge domains.

The DOCKET tool will offer three modules (sketched below):
(1) DOCKET Overview: analysis of, and knowledge extraction from, an individual data set.
(2) DOCKET Compare: comparing versions of the same data set to compute confidence values, and comparing different data sets to find commonalities.
(3) DOCKET Integrate: deriving knowledge through integrating different data sets.

Researchers will be able to parameterize these functions, resolve inconsistencies, and derive knowledge through the command line, Jupyter notebooks, or other interfaces as specified by Translator Standards.
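To make the intended workflow concrete, here is a minimal, hypothetical sketch of what the three modules could look like when driven from a Jupyter notebook. DOCKET is a proposed tool, so the function names and signatures below are illustrative assumptions rather than an existing API; only pandas is used.

```python
# Hypothetical sketch of the three proposed DOCKET modules. DOCKET does not
# yet expose this API; all names below are illustrative assumptions.
import pandas as pd

def overview(df: pd.DataFrame) -> dict:
    """DOCKET Overview: profile a single data set and extract basic knowledge."""
    return {
        "n_rows": len(df),
        "n_cols": df.shape[1],
        "dtypes": df.dtypes.astype(str).to_dict(),
        # Pairwise correlations among numeric columns are one simple form of
        # extracted "knowledge" (candidate edges).
        "correlations": df.corr(numeric_only=True),
    }

def compare(old: pd.DataFrame, new: pd.DataFrame) -> pd.DataFrame:
    """DOCKET Compare: contrast two versions of the same data set.

    Stability of pairwise correlations across versions serves here as a
    crude per-edge confidence signal.
    """
    return (new.corr(numeric_only=True) - old.corr(numeric_only=True)).abs()

def integrate(a: pd.DataFrame, b: pd.DataFrame, on: str) -> pd.DataFrame:
    """DOCKET Integrate: join two data sets on a shared entity identifier."""
    return a.merge(b, on=on, how="inner")

# Toy usage with two versions of a small wellness-style table.
v1 = pd.DataFrame({"subject": [1, 2, 3, 4],
                   "glucose": [90, 110, 130, 95],
                   "hba1c":   [5.1, 5.9, 6.4, 5.2]})
v2 = v1.assign(glucose=[92, 108, 133, 94])
proteome = pd.DataFrame({"subject": [1, 2, 3, 4],
                         "crp": [0.8, 2.1, 3.5, 0.9]})

print(overview(v1)["correlations"])
print(compare(v1, v2))                       # small values = stable edges
print(integrate(v1, proteome, on="subject"))
```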
The outcome will be a collection of nodes and edges, richly annotated with context, provenance and confidence levels, ready for incorporation into the Translator Knowledge Graph (TKG).

All analyses and derived knowledge will be stored in standardized formats, enabling querying through the Reasoner Std API and ingestion into downstream AI-assisted machine learning.
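For illustration, one such richly annotated edge might be serialized along the following lines. The exact schema is governed by the Translator standards (Biolink Model), so the field names, identifiers, and values here are approximations for illustration, not the normative format.

```python
# Illustrative only: one extracted edge carrying context, provenance, and a
# confidence level, in a Biolink-flavored structure. Field names and CURIEs
# approximate Translator conventions and are not the normative schema.
import json

edge = {
    "subject": "CHEBI:17234",             # glucose
    "predicate": "biolink:correlated_with",
    "object": "EFO:0004541",              # HbA1c measurement (assumed CURIE)
    "attributes": {
        "context": "wellness cohort, fasting blood draws",
        "provenance": "derived by DOCKET Overview from data set v2 (2020-01)",
        "confidence": 0.92,               # e.g., stability across versions
        "method": "Pearson correlation, BH-adjusted p < 0.01",
    },
}
print(json.dumps(edge, indent=2))
```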
Example questions this will allow us to address include:
- (Wellness) Which clinical analytes, metabolites, proteins, microbiome taxa, etc. are significantly correlated, and which changing analytes predict transition to which disease? [2,3] (See the sketch after this list.)
- (Cancer) Which gene mutations in any of X pathways are associated with sensitivity or resistance to any of Y drugs, in cell lines from Z tumor types?
- (All data sets) Which data set entities are similar to this one? Are there significant clusters? What distinguishes the clusters? What significant correlations of attributes can be observed? How can this set of entities be expanded by adding similar ones? How do these N versions of this data set differ, and how stable is each knowledge edge as the data set changes over time?
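As a concrete reading of the Wellness question, the following is a minimal sketch of an all-pairs correlation scan across analytes. The analyte names and values are hypothetical toy data; real runs would use the data sets listed on the Resources page.

```python
# Sketch of the Wellness-style question: which analytes are significantly
# correlated? Analyte names and values below are hypothetical toy data.
from itertools import combinations

import pandas as pd
from scipy import stats

# Hypothetical per-subject measurements (clinical analytes + metabolites).
data = pd.DataFrame({
    "ldl":     [110, 95, 130, 150, 100, 125],
    "crp":     [0.9, 0.4, 2.1, 3.0, 0.7, 1.8],
    "glycine": [210, 260, 180, 160, 240, 190],
    "urate":   [5.2, 4.1, 6.8, 7.3, 4.5, 6.0],
})

results = []
for a, b in combinations(data.columns, 2):
    r, p = stats.pearsonr(data[a], data[b])
    results.append({"analyte_1": a, "analyte_2": b, "r": r, "p": p})

pairs = pd.DataFrame(results).sort_values("p")
# In practice, p-values would be corrected for multiple testing
# (e.g., Benjamini-Hochberg) before any edge enters the knowledge graph.
print(pairs[pairs["p"] < 0.05])
```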
Collaboration strengths: Our team has extensive experience with biomedical and domain-agnostic
data analytics, integrating multiple relevant data types: omics, clinical measurements
and electronic health records (EHRs). We have participated in large collaborative consortia and
have subject matter experts willing to advise on proper data interpretation. Our application
synergizes with those of other Translator teams (see Letters of Collaboration).
Challenges: Data can come in a bewildering diversity of formats. Our solution will be modular, will address the most common formats first, and will leverage established technologies such as DataFrames and importers (e.g., pandas.io; sketched below) where possible. Mapping node and edge types onto standard ontologies is crucial for knowledge integration; we will collaborate with the Standards component to maximize success.
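To illustrate the format-handling strategy, here is a sketch of dispatching to established pandas.io importers by file extension. The extension-to-reader map is a deliberate simplification and the example path is a placeholder.

```python
# Sketch of format dispatch via established pandas importers (pandas.io).
# The extension-to-reader map is a simplification; real onboarding would
# also sniff delimiters, encodings, and header rows.
from pathlib import Path

import pandas as pd

READERS = {
    ".csv":     pd.read_csv,
    ".tsv":     lambda p: pd.read_csv(p, sep="\t"),
    ".json":    pd.read_json,
    ".parquet": pd.read_parquet,
    ".xlsx":    pd.read_excel,
}

def load_any(path: str) -> pd.DataFrame:
    """Load a data set into a DataFrame, choosing a reader by extension."""
    suffix = Path(path).suffix.lower()
    try:
        reader = READERS[suffix]
    except KeyError:
        raise ValueError(f"unsupported format: {suffix}") from None
    return reader(path)

# e.g., df = load_any("cohort_metabolites.tsv")  # placeholder path
```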
Status: Finished
Effective start/end date: 01/24/20 → 11/30/23
Funding
- National Center for Advancing Translational Sciences: $609,144.00
- National Center for Advancing Translational Sciences: $609,144.00
- National Center for Advancing Translational Sciences: $676,757.00
- National Center for Advancing Translational Sciences: $609,068.00