Original Article

Determination of Potential Drug–Drug Interactions Using Various Software Programs in a Community Pharmacy Setting

10.4274/tjps.30932

  • Mesut SANCAR
  • Aksa KAŞIK
  • Betül OKUYAN
  • Sevda BATUHAN
  • Fikret Vehbi İZZETTİN

Received Date: 21.08.2017 Accepted Date: 30.11.2017 Turk J Pharm Sci 2019;16(1):14-19 PMID: 32454689

Objectives:

The aim of the present study was to compare various software programs in detecting potential drug–drug interactions in a community pharmacy setting.

Materials and Methods:

Details of prescriptions were collected from 50 community pharmacies located in İstanbul in March and April 2015 (two days per week). From each pharmacy, the first 20 prescriptions that included more than one drug were collected to evaluate potential drug–drug interactions. The following software programs were utilized to detect potential drug–drug interactions: micromedexsolutions.com, medscape.com, and drugs.com. The number of potential interactions detected by the software programs was determined.

Results:

At least one potential drug–drug interaction was detected in 39.2% of the 1000 prescriptions by at least one of the software programs. According to the rates of total drug–drug interactions gathered from the various software programs, the results were as follows: medscape.com 33.3%, drugs.com 31.3%, and micromedexsolutions.com 21.2%.

Conclusion:

After comparing different software programs, the potential drug–drug interactions found by the programs proved to be different. Therefore, we recommend that pharmacists confirm clinically significant potential drug–drug interactions with a different program before making a decision.

Keywords: Drug–drug interactions, software programs, community pharmacy, pharmacist

INTRODUCTION

Drug–drug interactions (DDIs) are considered a drug-related problem that could result in severe consequences. Hospital admission, death, disability, organ failure, and congenital abnormalities can be caused by DDIs. Therefore, evaluation and determination of possible DDIs are essential.

Data on the reasons for admission to emergency departments have shown that DDIs can pose a risk.1 To reduce the number of DDIs and their possible harms, pharmacists should be aware of these possible interactions and must evaluate the clinical relevance of each. Pharmacists should be involved in optimizing medication treatment by preventing harmful DDIs and unsafe use of medications. However, when using software to detect possible DDIs, pharmacists are exposed to countless warnings, including many minor and moderate interactions. As a consequence, major DDIs might be overlooked.2

The reliability of software programs commonly used to detect possible DDIs has been evaluated, and the concordance rate between them has been investigated. The criteria used to define DDIs have not been standardized across software programs; as a result, some programs contain an excessive amount of data, and it is often difficult to distinguish clinically significant information.3

In one retrospective drug utilization review study conducted in a large patient population, the number of possible DDIs detected at baseline decreased by 70.8% when more sophisticated filtering was applied, and it fell by 80.6% after evaluation by a clinical pharmacist.4

Many studies have highlighted the problem of inconsistency between DDI software programs. These studies mostly examined programs that require a paid subscription, typically chosen because the researchers had institutional access to them; fewer have evaluated web sources that can be accessed free of charge.

Patient-oriented services, including clinical pharmacy and pharmaceutical care, have recently been developed in Turkey. In parallel with this development, community pharmacists’ skills in checking for possible DDIs are still progressing, albeit slowly.

Although many DDI checking programs are described in the literature and used in practice, Micromedex and Lexicomp are commonly used because they provide strong and comprehensive evidence, including the onset, severity, scientific evidence, pharmacologic effects, mechanism of action, and management of each DDI. In developing countries, the Medscape Drug Interaction Checker and the Monthly Index of Medical Specialties Interaction Checker, which are accessible free of charge, are more commonly used than Micromedex and Lexicomp.5

The aim of the present study was to compare Micromedex with two freely accessible web-based programs (medscape.com and drugs.com) to investigate whether a single software program is sufficient to determine possible DDIs in the community pharmacy setting. The results of the present study will be important when establishing guidelines for determining DDIs in community pharmacies.


MATERIALS AND METHODS

Details of prescriptions were collected from 50 community pharmacies in İstanbul in March and April 2015 (two days per week). These pharmacies were chosen from among those where fifth-year pharmacy students completed their ‘Pharmacy Practice’ course. Oral and written consent was obtained from each pharmacist after he or she was informed about the aim and methods of the present study. Ethical approval was obtained from Marmara University, Institute of Health Science (approval number: 26.01.2015-7).

From each pharmacy, students collected details of the first 20 prescriptions that included more than one drug in order to evaluate potential DDIs. Prescriptions written for patients under 18 years of age were excluded from the study.

Patients’ demographic information, including age and sex, was recorded. Prescriptions that included any drug not covered by the software programs were excluded.

The following software programs were utilized to detect potential DDIs: micromedexsolutions.com, medscape.com, and drugs.com (Table 1). The possible DDIs were analyzed retrospectively. The interactions were reported as major or serious, moderate or significant, and minor or mild (Table 1).


Statistical analysis

Continuous variables were presented as mean ± standard deviation, and ordinal and nominal data were presented as number (n) and percentage (%). Correlations between data were investigated using Spearman’s correlation test. The concordance between these online drug interaction programs with respect to the three severity levels of interaction was assessed for each DDI using kappa analysis. Statistical analysis was performed using SPSS for Windows 11.0, and p<0.05 was accepted as the level of statistical significance.
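As an illustration only (the authors’ analysis was performed in SPSS), the short Python sketch below shows how pairwise concordance rates, kappa coefficients, and Spearman correlations of this kind could be computed from per-prescription detection flags; the program names are those used in the study, but the 0/1 vectors are hypothetical placeholders rather than study data.

```python
# Illustrative sketch (not the authors' SPSS workflow): pairwise agreement
# between DDI programs on whether each prescription contains any DDI.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# One 0/1 flag per prescription: 1 = at least one potential DDI detected.
# These vectors are hypothetical placeholders, not the study data.
detections = {
    "micromedex": np.array([1, 0, 0, 1, 0, 1, 0, 0]),
    "medscape":   np.array([1, 0, 1, 1, 0, 1, 0, 0]),
    "drugs.com":  np.array([1, 0, 1, 1, 0, 0, 0, 0]),
}

for prog_a, prog_b in combinations(detections, 2):
    a, b = detections[prog_a], detections[prog_b]
    agreement = np.mean(a == b)        # simple pairwise concordance rate
    kappa = cohen_kappa_score(a, b)    # chance-corrected agreement
    rho, p_value = spearmanr(a, b)     # Spearman's rank correlation
    print(f"{prog_a} vs {prog_b}: agreement={agreement:.2f}, "
          f"kappa={kappa:.2f}, rho={rho:.2f} (p={p_value:.3f})")

# Three-way concordance: proportion of prescriptions on which all three
# programs agree (all detect a DDI, or none does).
stacked = np.vstack(list(detections.values()))
all_agree = np.mean(np.all(stacked == stacked[0], axis=0))
print(f"Concordance of all three programs: {all_agree:.2f}")
```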


RESULTS

In each prescription, the mean number of medications was 3.01±1.19 (range: 2-10). At least one potential DDI was detected in 39.2% of the 1000 prescriptions by at least one software program. More than half (58.7%) of the prescriptions in which at least one potential DDI was detected belonged to female patients, and the mean age of these patients was 54.63±17.20 years. The rates of total DDIs obtained from the various software programs were as follows: medscape.com 33.3%, drugs.com 31.3%, and micromedexsolutions.com 21.2%. The total numbers of DDIs detected by micromedexsolutions.com, medscape.com, and drugs.com were 389, 917, and 670, respectively. The rate of DDIs detected in prescriptions by all programs was 18%.

In pairwise comparisons of the programs, the concordance rates were high and the kappa coefficients were moderate (Table 2).

The concordance rate of the three programs (defined as all three programs agreeing on whether or not a patient’s prescription contained a DDI) was 78.9%; this rate was lower than the concordance rates obtained in the pairwise comparisons shown in Table 2.

In pairwise comparisons between the programs, Spearman’s correlation coefficients were 0.629, 0.711, and 0.688 (p<0.001), respectively. These results showed that the pairwise correlations were moderate.

To compare the severity rankings of the three DDI programs, the total number of DDIs without repetition detected by the three programs in the 1000 patients was calculated; a DDI was counted only once, whether the same interaction was obtained in more than one patient or the same interaction was reported with more than one mechanism. This yielded a total of 625 DDIs. The rates of these DDIs detected by Micromedex 2.0® Software Drug Interactions, the Medscape Drug Interaction Checker®, and drugs.com were 42.2%, 65.6%, and 74.1%, respectively. The severity rankings assigned by the three programs to these 625 DDIs were dissimilar (Table 3).
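As a minimal sketch of the deduplication rule described above (the interaction records and drug names below are hypothetical, not study data), each interaction can be keyed by its unordered drug pair so that repeats across patients or across mechanisms collapse to a single DDI, after which each program’s coverage of the unique set can be tallied.

```python
# Illustrative sketch: counting unique DDIs across programs, where the same
# drug pair found in several patients or reported with several mechanisms is
# counted only once. All records below are made-up examples.
from collections import defaultdict

# Each record: (program, drug_a, drug_b, severity) as it might be exported
# from a DDI checker.
records = [
    ("micromedex", "warfarin", "aspirin", "major"),
    ("medscape",   "aspirin", "warfarin", "serious"),      # same pair, other order
    ("medscape",   "warfarin", "aspirin", "significant"),  # second mechanism
    ("drugs.com",  "warfarin", "aspirin", "major"),
    ("medscape",   "ibuprofen", "naproxen", "significant"),
]

# Key each interaction by the unordered drug pair so duplicates collapse.
per_program = defaultdict(dict)   # program -> {pair: severity}
for program, drug_a, drug_b, severity in records:
    pair = frozenset((drug_a, drug_b))
    per_program[program].setdefault(pair, severity)  # keep first severity seen

unique_pairs = set().union(*per_program.values())
print(f"Unique DDIs across all programs: {len(unique_pairs)}")
for program, pairs in per_program.items():
    coverage = len(pairs) / len(unique_pairs)
    print(f"{program}: detects {len(pairs)} of {len(unique_pairs)} ({coverage:.0%})")
```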

When pairwise concordance between the programs was evaluated according to severity ranking, none of the rates was higher than 50% (Table 4). Of the 625 DDIs, 82 (13.1%) were assigned the same severity level by all three programs, and most of these 82 DDIs (68) were ranked as moderate. Micromedex classified 89 DDIs as major, and only 12 of these were also classified as major by the other two DDI programs used in the present study.

In pairwise comparisons between the three programs according to severity ranking, Spearman’s correlation coefficients were 0.222 (p<0.001), 0.366 (p<0.001), and 0.061 (p=0.125), respectively. These results indicated that the pairwise correlations for severity ranking were weak.


DISCUSSION

In the literature, studies that evaluated more than one DDI software program have usually emphasized the differences between programs, particularly in terms of their severity classifications. However, the three DDI software programs evaluated in the present study had similar classification systems for the clinical consequences of each possible DDI. Community pharmacists mostly prefer freely accessible DDI software programs because of economic concerns; for that reason, two web-based DDI software programs were chosen for the present study. For comparison with these programs, Micromedex, which is used as a comprehensive drug information source, was selected. The researchers’ university library had a subscription to Micromedex, and as part of their coursework, the fifth-year pharmacy students could access Micromedex and check possible DDIs in the prescriptions. The 1000 patient prescriptions were then re-analyzed by the researchers in accordance with the purpose of the present study.

In the present study, which assessed possible DDIs in 1000 patient prescriptions in a community pharmacy setting with three DDI software programs, Micromedex detected possible DDIs in the fewest patients (21.2%) compared with the other software programs. Moreover, comparison of the total numbers of possible DDIs showed that Micromedex detected approximately half the number obtained by the other two DDI programs. The Medscape DDI checker evaluated each DDI with more than one attributed mechanism as separate interactions, each scored with its own severity; this could partly explain the discrepancy. Moreover, Medscape reported more minor interactions than the other programs, which could also account for its higher total number of possible DDIs.

Similarly, Oshikoya et al.5 obtained a total of 596 potential DDIs in 280 patients with HIV; 84.6% of these were detected by Medscape and only 50.7% by USA MIMS (the Monthly Index of Medical Specialties Interaction Checker). The rate of DDIs was 46.1%, and the correlation between severity scores was weak.

Olvey et al.6 compared Micromedex with two standard software programs, Drug-Reax and Drug Interactions: Analysis and Management, by analyzing the critical DDI list of the US Department of Veterans Affairs (VA). According to the results of that study, 13.7% of a total of 982 DDIs considered critical by the VA were detected in all three software programs, and the concordance between programs was low. In the present study, the rate of DDIs detected in prescriptions by all programs was 18%. Pairwise concordance rates based on the number of patient prescriptions with DDIs identified by the software programs were approximately 84-88%, and the kappa coefficients were between 0.6 and 0.7. On the other hand, when all three programs were analyzed together, the concordance rate was under 80%. These results and correlation values showed a moderate concordance between the three DDI software programs according to the number of patient prescriptions. Compared with other studies, the concordance rate was higher in the present study. Vonbach et al.3 found a total of 157 DDIs using Drug Interaction Facts, Drug-Reax, Lexi-Interact, and Pharmavista, and only 11% of them were detected by all of the DDI software programs. In that study, none of the DDI software programs could detect more than 50% of the total DDIs.

Bergk et al.7 compared clinically significant DDIs using the German SmPC, DRUGDEX, Hansten/Horn’s Drug Interaction Analysis and Management, and Stockley’s Drug Interactions and determined that 33% of the DDIs were handled similarly in all of these sources.

Chao and Maibach8 compared four DDI compendia most commonly used in the USA (Mosby’s GenRx, USP DI, AHFS Drug Information, and the Physicians’ Desk Reference) by screening DDIs for the four most commonly prescribed medications in dermatology services, and found these compendia to be inconsistent. The concordance rate decreased when more than two compendia were compared, and only 8.9% of the total number of DDIs were found in all four compendia. Therefore, Chao and Maibach8 suggested that these compendia be reassessed according to information in the literature and the clinical relevance of each DDI.

In another study, which compared the BNF with the electronic Medicines Compendium (eMC) and DailyMed, the BNF listed twice as many DDIs as DailyMed; 63.9% of the DDIs were found in only one compendium, and the rate of DDIs detected in all three compendia was 15.12%.9 A weak correlation coefficient (0.366) was measured between the three compendia. This inconsistency was attributed to differences in drug classification among the three systems and to the fact that the sources of the listed DDIs were not presented.9

In most of these studies, the discordance between the various DDI programs was attributed not to differences in the total number of possible DDIs but to differences in the severity classifications used by the programs.10,11,12,13

Compared with the previous studies mentioned above, the concordance between the DDI programs used in the present study was high in terms of the number of patients with possible DDIs detected by each program. Although the DDI programs used in the present study had quite similar severity classification systems for possible DDIs, the concordance in severity ranking was low. Pairwise concordance rates for severity ranking were below 50%, and the kappa coefficients were also relatively low. Only 13.1% of a total of 625 DDIs were assigned the same severity level by all three programs. Micromedex classified 89 DDIs as major, and only 12 of these were also classified as major by the other two DDI programs used in the present study.

Vitry14 compared four different programs and found that the rate of interactions classified as major by at least one program ranged from 14% to 44%, noting inconsistency between programs in the grading of severity and the quality of their supporting evidence. The reasons stated for this discordance included varying inclusion criteria, different information sources, and dissimilar therapeutic drug classifications in each program, as well as the lack of a common severity classification based on the clinical relevance of each DDI.

When they compared three different DDI programs with respect to antiepileptic drugs, Ekstein et al.15 found more than 30% of interactions in at least one program. In that study, the concordance rate was less than 30% even when severity levels were classified as high between programs. These discrepancies could be attributed to differences in definitions and terminology in each program, varying interpretations of information in the literature, and the different drug classifications used in the various DDI programs.

It is well known that DDI programs should be more sensitive and specific for practical usage by pharmacists.16,17 Reis and Cassiani18 compared DDI programs by selecting one of them as the gold standard and calculated their sensitivity and specificity. In that study, the limitations of DDI programs were emphasized and evaluation of DDI programs chosen for detection of possible DDIs in a hospital setting was suggested.

Some of the possible DDIs clearly differed between programs in the present study. For example, some experts define it as polypharmacy when two NSAIDs are present in the same prescription. Of the DDI programs used in this study, only Medscape warned of a moderate (significant) interaction in this situation; the other programs did not report any interaction between two concurrently prescribed NSAIDs. This kind of interaction, which was found in 21 of the 1000 patients in the present study, could partly account for the discordance between programs. All these discrepancies raise the question of which DDI program should be selected as the gold standard when the sensitivity and specificity of DDI programs are evaluated.

Based on the results of the present study and of other studies in the literature, DDI programs should be re-evaluated to improve concordance between them by assessing evidence-based outcomes and severity classifications. According to the report of a consensus panel that evaluated DDI evidence for clinical decision support, the following recommendations were offered to obtain high-quality information from DDI programs: consistent terminology should be established; the Drug Interaction Probability Scale should be used to assess case reports of possible DDIs; a new approach should be developed for evaluating DDI evidence; FDA documents and drug leaflets should be assessed with the same criteria as other reported evidence; and, once evidence is identified, the possible DDI should be classified according to therapeutic/pharmacologic groups.19

The following suggestions would improve patient safety: well-designed studies should be conducted to determine the incidence, outcomes, and patient-related risk factors of DDIs; algorithms should be produced for defining systematic and clear processes of assessing evidence to evaluate the risk and severity of possible DDIs; and evidence of possible DDIs should be integrated into electronic systems.20

Because of the discordance between DDI programs, when pharmacists detect a major DDI and/or any DDI in a clinically critical patient, they should confirm it using another DDI program. Although this seems time consuming, it could improve patient safety. For this reason, it has been suggested that health care providers check possible DDIs with more than one DDI program in clinically critical patients, such as those with HIV.5


Limitation of the study

In the present study, only three software programs were used, because those chosen had similar severity classification properties and the two web-based programs are freely accessible worldwide, including in Turkey. One limitation of the present study was that Rx Media Pharma, the most commonly used Turkish drug information source, was not included. The number of prescriptions analyzed in the present study was large, which allowed a wide range of medications and diseases to be evaluated. Although this might seem to be an advantage for assessing possible DDIs comprehensively, some experts might consider it a limitation because concordance could not be demonstrated for specific medication groups such as antiepileptics, antidepressants, and anticoagulants.


CONCLUSIONS

A high rate of potential DDIs was detected in a community pharmacy setting in the present study. Comparison of the software programs showed that the potential DDIs they reported differed from one another. Therefore, we recommend that pharmacists confirm clinically significant potential drug–drug interactions with a different DDI program before making a decision.


ACKNOWLEDGEMENT

This study was supported by the Marmara University Scientific Research Projects Committee (SAG-D-071015-0473).

Conflicts of Interest: No conflict of interest was declared by the authors.

REFERENCES

  1. Raschetti R, Morgutti M, Menniti-Ippolito F, Belisari A, Rossignoli A, Longhini P, La Guidara C. Suspected adverse drug events requiring emergency department visits or hospital admissions. Eur J Clin Pharmacol. 1999;54:959-963.
  2. Indermitte J, Beutler M, Bruppacher R, Meier CR, Hersberg KE. Management of drug-interaction alerts in community pharmacies. J Clin Pharm Ther. 2007;32:133-142.
  3. Vonbach P, Dubied A, Krahenbühl S, Beer JH. Evaluation of frequently used drug interaction screening programs. Pharm World Sci. 2008;30:367-374.
  4. Peng CC, Glassman PA, Marks IR, Fowler C, Castiglione B, Good CB. Retrospective drug utilization review: incidence of clinically relevant potential drug-drug interactions in a large ambulatory population. J Manag Care Pharm. 2003;9:513-522.
  5. Oshikoya KA, Oreagba IA, Ogunleye OO, Lawal S, Senbanjo IO. Clinically significant interactions between antiretroviral and co-prescribed drugs for HIV-infected children: profiling and comparison of two drug databases. Ther Clin Risk Manag. 2013;9:215-221.
  6. Olvey EL, Clauschee S, Malone DC. Comparison of Critical Drug-Drug Interaction Listings: The Department of Veterans Affairs Medical System and Standard Reference Compendia. Clin Pharmacol Ther. 2010;87:48-51.
  7. Bergk V, Haefeli WE, Gasse C, Brenner H, Martin-Facklam M. Information deficits in the summary of product characteristics preclude an optimal management of drug interactions: a comparison with evidence from the literature. Eur J Clin Pharmacol. 2005;61:327-335.
  8. Chao SD, Maibach HI. Lack of drug interaction conformity in commonly used drug compendia for selected at-risk dermatologic drugs. Am J Clin Dermatol. 2005;6:105-111.
  9. Nikolic BS, Ilic MS. Assessment of the consistency among three drug compendia in listing and ranking of drug-drug interactions. Bosn J Basic Med Sci. 2013;13:253-258.
  10. Abarca J, Malone DC, Armstrong EP, Grizzle AJ, Hansten PD, Van Bergen RC, Lipton RB. Concordance of severity ratings provided in four drug interaction compendia. J Am Pharm Assoc (2003). 2004;44:136-141.
  11. Fulda TR, Valuck RJ, Vander Zanden JV, Parker S, Byrns PJ. Disagreement among drug compendia on inclusion and ratings of drug-drug interactions. Current Therapeutic Research. 2000;61:540-548.
  12. Hines LE, Ceron-Cabrera D, Romero K, Anthony M, Woosley RL, Armstrong EP, Malone DC. Evaluation of Warfarin Drug Interaction Listings in US Product Information for Warfarin and Interacting Drugs. Clin Ther. 2011;33:36-45.
  13. Martins MA, Carlos PP, Ribeiro DD, Nobre VA, César CC, Rocha MO, Ribeiro AL. Warfarin drug interactions: a comparative evaluation of the lists provided by five information sources. Eur J Clin Pharmacol. 2011;67:1301-1308.
  14. Vitry AI. Comparative assessment of four drug interaction compendia. Br J Clin Pharmacol. 2007;63:709-714.
  15. Ekstein D, Tirosh M, Eyal Y, Eyal S. Drug interactions involving antiepileptic drugs: Assessment of the consistency among three drug compendia and FDA-approved labels. Epilepsy Behav. 2015;44:218-224.
  16. Warholak TL, Hines LE, Saverno KR, Grizzle AJ, Malone DC. Assessment tool for pharmacy drug-drug interaction software. J Am Pharm Assoc (2003). 2011;51:418-424.
  17. Sweidan M, Reeve JF, Brien JA, Jayasuriya P, Martin JH, Vernon GM. Quality of drug interaction alerts in prescribing and dispensing software. Med J Aust. 2009;190:251-254.
  18. Reis AM, Cassiani SH. Evaluation of three brands of drug interaction software for use in intensive care units. Pharm World Sci. 2010;32:822-828.
  19. Scheife RT, Hines LE, Boyce RD, Chung SP, Momper JD, Sommer CD, Abernethy DR, Horn JR, Sklar SJ, Wong SK, Jones G, Brown ML, Grizzle AJ, Comes S, Wilkins TL, Borst C, Wittie MA, Malone DC. Consensus Recommendations for Systematic Evaluation of Drug-Drug Interaction Evidence for Clinical Decision Support. Drug Saf. 2015;38:197-206.
  20. Hines LE, Malone DC, Murphy JE. Recommendations for Generating, Evaluating, and Implementing Drug-Drug Interaction Evidence. Pharmacotherapy. 2012;32:304-313.