Data Analytics and Data Mining for Healthcare


Data analytics and data mining in healthcare are not for the faint of heart. There are massive amounts of disparate structured and unstructured data. But there is value if you can simplify and extract knowledge from that complexity. So, how do you start synthesizing different structured and unstructured data into knowledge?

Getting Started:

Healthcare Data Comes from Many Sources – Synthesize It into Language You Understand

  • Healthcare insurance companies and providers have terabytes of data. Though there are federal standards for electronic healthcare data, the structure of the claim data that insurance companies maintain does not match the structure of clinical data in hospitals, labs, and physician offices.
  • Unstructured data, such as hand-written orders and narrative clinical documentation, must be reconciled with structured data, such as Electronic Health Record discrete fields and EDI claims.
  • Structured and unstructured data are co-dependent in the world of proving medical necessity, for example, but these data are normally stored in dissimilar systems, where they are difficult to use together.
  • The clinicians, coders, insurance claim reviewers, and executives who rely on structured and unstructured data do not think in terms of rows, columns, or documents. They think in terms of their organization’s objectives and results.
  • They need methods to query the data and obtain answers they understand, measured against economic and policy reference points such as standards and regulations.
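As a minimal sketch of what "answers they understand" can mean in practice, the example below turns a plain-language question ("What did we pay for emergency visits last quarter?") into a function over normalized claim rows. The field names (`place_of_service`, `paid_amount`, `service_date`) and the sample data are illustrative assumptions, not a real schema; POS code 23 is the CMS place-of-service code for an emergency room.

```python
# Hypothetical sketch: answering a business question against normalized
# claim rows without the asker needing to know the row/column layout.
# Field names and sample values are assumptions for illustration.
from datetime import date

claims = [
    {"claim_id": "C1", "place_of_service": "23", "paid_amount": 1200.0,
     "service_date": date(2024, 2, 10)},   # 23 = emergency room (CMS POS code)
    {"claim_id": "C2", "place_of_service": "11", "paid_amount": 150.0,
     "service_date": date(2024, 2, 12)},   # 11 = office
    {"claim_id": "C3", "place_of_service": "23", "paid_amount": 800.0,
     "service_date": date(2023, 11, 5)},   # outside the quarter of interest
]

def paid_for_emergency_visits(rows, start, end):
    """Total paid amount for ER claims within a date range."""
    return sum(r["paid_amount"] for r in rows
               if r["place_of_service"] == "23"
               and start <= r["service_date"] <= end)

total = paid_for_emergency_visits(claims, date(2024, 1, 1), date(2024, 3, 31))
print(total)  # only C1 falls in Q1 2024 with POS 23
```

The business user asks in terms of "emergency visits" and "last quarter"; the data model translates those into filters on normalized fields.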

Orchestrate the Data for Clinically Integrated Insight

  • An essential component of our data analytics and data mining technology and engagement is a team-designed data specification that sources the correct raw data.
  • We integrate, transform, and normalize millions of data elements into rational data models that help form clarity from complexity.
  • We test our assumptions to ensure that the data meets the designed scope and quality standards.
  • Next, as data streams into the data transformation and quality policy system, it is transformed based on the data model and mapped into meaningful events, comparative metrics, and values, with associations back to the source data that connect them.
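The transform-and-map step above can be sketched as follows: raw records from dissimilar sources are normalized against a small data model and emitted as events that retain a pointer back to their source record. The source names, field names, and model here are illustrative assumptions, not a real specification.

```python
# Minimal sketch of normalizing raw records from two dissimilar sources
# into events defined by a common data model. Every event keeps a
# source_ref so it can be traced back to the raw record it came from.
# All source names and fields are assumptions for illustration.

RAW_RECORDS = [
    {"src": "lab_feed", "id": "L-77", "TEST": "HBA1C", "RESULT": "7.9", "UNITS": "%"},
    {"src": "claims_837p", "id": "P-12", "dx": "E11.9", "cpt": "83036"},
]

def normalize(record):
    """Map a raw record into a normalized event keyed by the data model."""
    if record["src"] == "lab_feed":
        return {"event_type": "lab_result",
                "code": record["TEST"].lower(),
                "value": float(record["RESULT"]),
                "unit": record["UNITS"],
                "source_ref": (record["src"], record["id"])}
    if record["src"] == "claims_837p":
        return {"event_type": "professional_claim",
                "diagnosis": record["dx"],
                "procedure": record["cpt"],
                "source_ref": (record["src"], record["id"])}
    raise ValueError(f"unknown source: {record['src']}")

events = [normalize(r) for r in RAW_RECORDS]
print(events[0]["event_type"], events[0]["source_ref"])
```

The key design point is the `source_ref`: downstream metrics and events stay connected to the raw data they were derived from, which is what makes the quality assessments described below auditable.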

HIPAA and HITECH Act Compliant Data Security

Once the data model is implemented, source data courses through data mining and data quality assessments in regular cycles. Information safeguards as specified in the HITECH Act and the HIPAA Omnibus Rule are implemented so that uses and disclosures of data meet U.S. healthcare standards.

Data Analytics and Informatics Approach to Derive Knowledge and Meaning from Raw Data

  • Third Party Liability in ERISA Self-Insured Plans and Medicare Secondary Payer Act cases. Health care costs are out of control, and injuries represent a significant portion of the cost. Many injuries are the result of some third-party cause. As a self-insured healthcare purchaser, you don't want to pay for healthcare that is someone else's responsibility. Similarly, property and casualty insurers have a duty as a Required Reporting Entity (RRE) to report losses and settlements of litigation to the Centers for Medicare & Medicaid Services when they cover costs for an insured who is also covered by Medicare. Payers have no real incentive to control costs if they are not taking the risk. Third Party Liability analytics give the employer, the property and casualty insurer, and the payer actionable insights.

Data Mining to Collect the Right Data for Business Objectives and Project Scope – Extraction, Transformation, Normalization, and Loading

  • Healthcare Fraud Claim Analytics – Use data ETL and normalization, business rules based on carrier and provider policies, and industry standards to identify suspicious and potentially fraudulent claims. Determine the potential magnitude before or after payment, and respond rapidly to patterns of fraud. Investigate suspicious activity by collecting data from documentation mapped to standard HIPAA EDI 837I (institutional: hospital, ambulatory surgical center) or EDI 837P (professional: physician, lab, etc.) at each stage of the process, from diagnosis to eligibility to care delivery, claim submission, adjudication, and remittance.
  • Healthcare Service Line Cohorts – Understand trends by plan design, provider, inpatient or outpatient place of service, or medical specialty using root cause analyses with fishbone (Ishikawa) diagrams that aggregate data. Based on the root causes, act to improve continuity and frame tactics and strategy to enhance service lines.
  • Revenue Cycle Management – Improve the rate and precision of claims processing by identifying ambiguous rules that prevent auto-adjudication and cause claims to drop to paper for human review. Improve the specificity and sensitivity of claim edits, business rules, and clinical coverage determinations using automated decision making. Channel elaborate or questionable claims to claim utilization review or SIU review. Integrate unstructured clinical documentation and structured electronic data interchange (EDI) data to automate claim reviews in real time.
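The rules-based fraud screening described above can be sketched as a set of small rule functions, each encoding one carrier or industry policy and returning a reason string when a claim looks suspicious. The rules, thresholds, and field names below are illustrative assumptions, not actual carrier policies.

```python
# Hedged sketch of rules-based claim screening. Each rule encodes one
# policy; a claim is flagged with the reasons of every rule it trips.
# Thresholds and field names are made up for illustration.

def rule_duplicate_billing(claim, history):
    """Flag a claim identical in provider/patient/procedure/date to a prior one."""
    key = (claim["provider"], claim["patient"], claim["cpt"], claim["service_date"])
    if key in history:
        return "duplicate of previously submitted claim"
    history.add(key)
    return None

def rule_excessive_units(claim):
    """Flag an implausible number of service units (assumed daily ceiling)."""
    if claim.get("units", 1) > 24:
        return "units exceed plausible daily maximum"
    return None

def screen(claim_batch):
    history, flagged = set(), []
    for c in claim_batch:
        reasons = [r for r in (rule_duplicate_billing(c, history),
                               rule_excessive_units(c)) if r]
        if reasons:
            flagged.append((c["claim_id"], reasons))
    return flagged

batch = [
    {"claim_id": "A", "provider": "P1", "patient": "M1", "cpt": "99213",
     "service_date": "2024-03-01", "units": 1},
    {"claim_id": "B", "provider": "P1", "patient": "M1", "cpt": "99213",
     "service_date": "2024-03-01", "units": 1},   # duplicate of A
    {"claim_id": "C", "provider": "P2", "patient": "M2", "cpt": "97110",
     "service_date": "2024-03-02", "units": 40},  # implausible units
]
print(screen(batch))
```

Because each rule is an independent function, new carrier policies can be added without touching existing ones, and every flag carries a human-readable reason for the downstream utilization or SIU reviewer.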


Related Topics

Integrated medical bill review and risk management

Revenue Cycle Management analytics

Inspiration for our data analytics and data mining solutions:

Modeling biomedical systems; data and knowledge processing in biomedicine; controlled terminologies; knowledge representation; rule-based systems; description logic (components of 'Artificial Intelligence in medicine'); building systems with ontologies and problem-solving methods (2015). Curating digital health care information from non-technical data sources; evidence-based practice cycles for clinical decisions; the PICO framework (2019).

Computational manipulation of knowledge is an important, and often under-appreciated, aspect of biomedical Data Science. The first Data Science initiative from the US National Institutes of Health was entitled “Big Data to Knowledge (BD2K).” The main emphasis of the more than $200M allocated to that program has been on “Big Data”; the “Knowledge” component has largely been the implicit assumption that the work will lead to new biomedical knowledge. However, there is long-standing and highly productive work in computational knowledge representation and reasoning, and computational processing of knowledge has a role in the world of Data Science. Knowledge-based biomedical Data Science involves the design and implementation of computer systems that act as if they knew about biomedicine. See NLM Knowledge based biomedical data science.

Examples include methods to use controlled terminology that “…enhances the process of identifying patients who are potentially eligible for clinical trials of experimental therapies in a clinic that is limited by the existence of a singular clinical trial coordinator. Effective implementation of such a system requires the development of a meaningful controlled medical terminology that satisfies the needs of a diverse community of providers all of who contribute to the health care process…”

Logic and Artificial Intelligence, Stanford Encyclopedia of Philosophy.

Much of this literature compares knowledge reuse to software code reuse. Ontology examples were presented for web search engines, etc. (see Modern Architectures for Intelligent Systems: Reusable Ontologies and Problem-Solving Methods); however, the complexity of mapping codes such as ICD-10-CM or ICD-10-PCS to meaning is much more challenging.
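To make the mapping challenge concrete, the sketch below contrasts a flat code-to-label lookup with a decomposition of an ICD-10-CM code's positional structure. The two-entry terminology table is an abbreviated, made-up excerpt for illustration, not an official code set, and the labels are paraphrased.

```python
# Illustrative sketch of why mapping codes to meaning is hard: a flat
# lookup gives a label, but an ICD-10-CM code also carries internal
# structure (category before the dot; site, laterality, and encounter
# detail after it) that a plain dictionary loses. The table below is a
# made-up excerpt, not an official code set.

TERMINOLOGY = {
    "S52.521A": "Torus fracture of lower end of right radius, initial encounter",
    "E11.9":    "Type 2 diabetes mellitus without complications",
}

def describe(code):
    """Flat lookup: returns the label, or None for codes not in the table."""
    return TERMINOLOGY.get(code)

def icd10cm_parts(code):
    """Decompose an ICD-10-CM code into its positional components."""
    category, _, detail = code.partition(".")
    return {"category": category, "detail": detail}

print(describe("E11.9"))
print(icd10cm_parts("S52.521A"))
```

A real terminology service has to expose that internal structure (and the hierarchy around it), which is precisely what makes code-to-meaning mapping harder than reusing a generic web-search ontology.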

Observations regarding MedlinePlus in contrast to the National Library of Medicine, Google Scholar, and PubMed articles

The PICO process (or framework) is a mnemonic used in evidence-based practice (and specifically Evidence-Based Medicine) to frame and answer a clinical or health care related question. The PICO framework is also used to develop literature search strategies, for instance in systematic reviews. The PICO acronym stands for:

  • P – Patient, Problem or Population
  • I – Intervention
  • C – Comparison, control or comparator
  • O – Outcome(s) (e.g. pain, fatigue, nausea, infections, death)
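A PICO question can be captured as a small structured record and rendered into a literature search string. The sketch below is a minimal illustration; the clinical question is a made-up example, and real systematic-review strategies would use controlled vocabularies (e.g. MeSH terms) and synonym expansion rather than this naive AND-joined rendering.

```python
# Minimal sketch: a PICO question as a structured record, rendered into a
# naive boolean search string. The example question is made up; real search
# strategies use controlled vocabularies and synonym expansion.
from dataclasses import dataclass

@dataclass
class PICO:
    population: str    # P - Patient, Problem or Population
    intervention: str  # I - Intervention
    comparison: str    # C - Comparison, control or comparator
    outcome: str       # O - Outcome(s)

    def search_terms(self):
        """Render the four components as an AND-joined search string."""
        return (f"({self.population}) AND ({self.intervention}) "
                f"AND ({self.comparison}) AND ({self.outcome})")

q = PICO(population="adults with type 2 diabetes",
         intervention="metformin",
         comparison="sulfonylurea",
         outcome="HbA1c reduction")
print(q.search_terms())
```

Structuring the question this way keeps each PICO component explicit, so the same record can drive both the clinical framing and the literature search strategy.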