Working Group Overview
The FDA/PhUSE collaboration was created to provide high-quality quantitative analysis of information on product safety, effectiveness, and quality. The core infrastructure for FDA's scientific community will help support ongoing efforts in pre-market development, modernization of drug review, post-market safety, and drug quality. Key projects within the FDA's Computational Science Center focus on developing data standards and expanding the use of electronic review tools.
The collaboration was previously organized into 6 Working Groups:
- Working Group 1 - Data Validation and Quality Assessment
- Working Group 2 - Reducing Risk within the Inspection Site Selection Process
- Working Group 3 - Challenges of Integrating and Converting Data across Studies
- Working Group 4 - Standards Implementation Issues with the CDISC Data Models
- Working Group 5 - Development of Standard Scripts for Analysis and Programming
- Working Group 6 - Non-Clinical Road-map and Impacts on Implementation
Each working group identified its own subgroups, projects, goals, and timelines.
To maximize efficiency and organization, the groups will be reorganized into fewer working groups with broader focus. Each working group will have specific, well-defined projects. The working groups that will be discussed at the March meeting are described below. In addition to the project lists, the groups will hold open discussions about other issues and gaps that should be addressed.
(NEW) Emerging Technologies
New challenges in regulatory science and drug, biologic, and device development provide new opportunities for recognizing and leveraging emerging technologies and computational tools. In recognition of this need, the FDA/PhUSE Computational Science Symposium (CSS) will use the March 2013 meeting as an opportunity to “launch” a new working group that provides a forum for determining interest in specific computational science topics, tools, technologies, and approaches.
This emerging technologies working group will be an open, transparent forum for sharing pre-competitive means of applying new technologies, and it is challenged with creating well-defined collaborative projects that describe, prioritize, assess, and assist the advancement of these opportunities. Possible topics include (but are not limited to) semantic web applications, analysis metadata, modeling, simulation, and “the cloud”. Projects incorporating these topics might include prioritization, development, and piloting for feasibility and value.
Improving Data Quality
This working group will focus on collaborating to develop a robust process to rapidly validate and assess data quality as data moves through the product life cycle, across both industry and regulatory review. The group will discuss current pain points and potential solutions regarding topics such as current data validation rules, therapeutic-area-specific data issues, and improving the quality of the data to support regulatory review.
SDTM Validation Rules
The purpose of this project is to continue the effort of improving SDTM validation rules, ensuring that all rules are clear and effective for use by both industry and FDA. The goal is to identify new rules that will add value and to manage rules that need clarification. This will involve coordinating with CDISC, OpenCDISC, and FDA to propose solutions.
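As an illustration only, not an official OpenCDISC rule implementation, the kind of validation rule this project reviews can be sketched in a few lines of Python. The example below is a hypothetical check that USUBJID values are unique within the SDTM DM (Demographics) domain; the record structure and rule identifier are assumptions for the sketch, since real checkers operate on submitted transport files.

```python
# Hypothetical sketch of one SDTM-style validation rule:
# in the DM (Demographics) domain, each USUBJID should appear exactly once.
# Records are modeled as plain dicts purely for illustration.

from collections import Counter

def check_unique_usubjid(dm_records):
    """Return a list of rule violations: USUBJIDs duplicated in DM."""
    counts = Counter(rec["USUBJID"] for rec in dm_records)
    return [
        {"rule": "DM-USUBJID-UNIQUE", "usubjid": usubjid, "count": n}
        for usubjid, n in counts.items()
        if n > 1
    ]

dm = [
    {"USUBJID": "STUDY1-001", "AGE": 34},
    {"USUBJID": "STUDY1-002", "AGE": 41},
    {"USUBJID": "STUDY1-001", "AGE": 34},  # duplicate subject record
]
violations = check_unique_usubjid(dm)
```

A rule expressed this way is easy to make "clear and effective": the rule identifier, the condition, and the offending records are all explicit in the output.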
Optimizing the Use of Data Standards
The development and adoption of data standards over the last decade has shown significant promise in improving efficiencies in the product submission and review process. However, there have also been gaps, issues, and challenges in the interpretation and use of the standards. This group will identify specific gaps preventing FDA and industry from optimizing the use of standards and will collaborate to close those gaps.
Study Data Reviewer’s Guide (SDRG)
The define.xml document does not adequately document mapping decisions, sponsor-defined domains, and other key study components; an SDRG would help to address this documentation gap. The goal of this project is to develop an SDRG template jointly between CDER, industry, and CDISC to be used for submissions.
Traceability and Data Flow of Study Level Data
Challenges arise at the study level when raw data are converted to SDTM after the fact, while analysis datasets and the study report trace back to the original raw data source. This project will discuss and define traceability considerations and best practices for study-level dataset conversion across a variety of data flow scenarios.
(NEW) CDRH Pilot for the Electronic Submission of Medical Device
Data in an SDTM-Based Format
With the recent development of new SDTM-based device domains, industry and CDRH would like to conduct a pilot to evaluate the use of the domains for submissions.
(NEW) Analysis Data Reviewer’s Guide (ADRG)
ADaM “provides a framework that enables analysis of the data, while at the same time allowing reviewers and other recipients of the data to have a clear understanding of the data’s lineage from collection to analysis to results.” Although ADaM provides a robust metadata framework, FDA reviewers benefit from additional, human-readable documentation of analysis methods, datasets, and programs that cannot be fully explained within the ADaM metadata. The development of an Analysis Data Reviewer’s Guide (ADRG) template will ensure this documentation is provided to the agency in a consistent and usable format.
(NEW) SDTM Exceptions
Clinical trials often collect data elements to support operational activities such as data cleaning or data reconciliation. Although these data elements are not analyzed, sponsors frequently tabulate them in SDTM. As a result, both sponsor analysts and FDA reviewers spend time differentiating analyzable observations from operational noise. Documenting data elements of limited utility to data analysis and/or FDA reviewers provides sponsors and the agency with a common baseline for pre-submission data standards.
Development of Standard Scripts for Analysis and Programming
With the development and implementation of industry data standards, there is a great opportunity to develop standard reporting across industry and to support the needs of FDA medical and statistical reviewers. This working group will identify potential standard scripts for data transformations and analyses across and within therapeutic areas. The goal will be to begin the process of standardizing analyses across the industry, and also to include examples of what can be done with a standardized data set.
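To make the idea of a "standard script" concrete, here is a minimal, hypothetical Python sketch of the sort of reusable analysis such scripts might standardize: a demographics summary (subject counts and mean age) by treatment arm over SDTM-like DM records. The variable names (ARM, AGE, USUBJID) follow SDTM conventions, but the script itself is illustrative, not a working-group deliverable.

```python
# Hypothetical standard-script sketch: summarize subject counts and mean age
# by treatment arm from SDTM-like DM (Demographics) records.

from collections import defaultdict

def summarize_demographics(dm_records):
    """Return {arm: {"n": count, "mean_age": mean}} per treatment arm."""
    ages = defaultdict(list)
    for rec in dm_records:
        ages[rec["ARM"]].append(rec["AGE"])
    return {
        arm: {"n": len(vals), "mean_age": sum(vals) / len(vals)}
        for arm, vals in ages.items()
    }

dm = [
    {"USUBJID": "S1-001", "ARM": "Placebo", "AGE": 34},
    {"USUBJID": "S1-002", "ARM": "Placebo", "AGE": 46},
    {"USUBJID": "S1-003", "ARM": "Drug A", "AGE": 41},
]
summary = summarize_demographics(dm)
```

Because the script consumes standardized variable names rather than sponsor-specific ones, the same code can run unchanged against any conforming study, which is precisely the efficiency the working group is pursuing.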
Standard Script Repository
This project is currently developing an open source repository to develop, share, and validate scripts. The group is working to bring together scripts, tools, and programs into a single repository; establish the basic structure and management of the repository; and create a template and process for the creation, review, and validation of the tools.
Standard Scripts Development
This project includes the development of approximately six white papers that provide recommended Tables, Figures, and Listings for clinical trial study reports and submission documents. The intent is to begin the process of developing industry standards with respect to analysis and reporting for measurements that are common across clinical trials and across therapeutic areas. In addition, the project will develop a communication plan that conceptualizes efficient ways to communicate working group progress and results, e.g. white papers and the call for scripts. It will define target groups, timing, and communication channels.
Non-Clinical Road-map and Impacts on Implementation
There is a need to improve nonclinical assessment and regulatory science by identifying key needs and challenges in the field and then establishing an innovative framework for addressing them in a collaborative manner. The group created a framework for moving certain projects forward to support nonclinical informatics efforts and to develop specific implementation solutions for SEND.
Nonclinical Data Interconnectivity
This project is currently developing use cases for piloting valid human-to-nonhuman endpoint prediction modeling.
This project is developing recommendations for high-priority projects regarding the use of historical control data. These recommendations include identifying areas where additional aggregation or analysis of historical control data may be useful, or identifying data standards to develop and pilot for the exchange of historical data.
The responsibility for creating the SEND files for a study is often shared across organizations, and clarity is needed on how these responsibilities can be effectively managed. This project will develop a framework to classify and prioritize scenarios in which data from multiple organizations need to be aggregated, and will work to find solutions to the challenges these scenarios present when there are protocol amendments, audits that trigger data changes, changes to the SEND standard, etc.
Implementation User Group
The SEND Implementation Wiki is a
knowledge base on implementing SEND - a series of articles designed to help you
understand and implement SEND, including modeling basics, useful links, FAQs,
and more. In addition to the wiki, a companion SEND Implementation Forum is
being developed by the project for future release, to provide a place for
active discussion and to ask questions not already answered in the wiki.
The SEND standard allows for the submission of general toxicology and carcinogenicity studies in an electronic standardized format. However, there are additional study types that are generally received but have not been standardized. This project is developing a strategy for how to prioritize, maximize, and transform standardization efforts. This plan will take into account resources, complexity, timelines, and new approaches and technologies.