We at Glassbeam always strive to expand our portfolio of supported products by listening to our customers and understanding their areas of acute pain. As we rolled out our utilization analytics solution in Clinsights™, we spoke with various radiology groups to understand what gaps still needed to be addressed. One recurring issue was the difficulty of understanding the reject ratio for technologists, particularly in the Digital Radiography (DR) department.
With high patient volumes and a large number of trainees, especially in teaching hospitals, there is a significant need to ensure that quality of care and quality of results are not compromised. By verifying the images in the PACS, one can ensure the results meet the standards of the hospital. But what about all those images that never make it to the PACS? Most DR systems allow the technologist to discard images they deem “bad,” and the reasons can be numerous:
- Positioning error – Not the right alignment, angle, etc.
- Motion – Detector or patient movement resulting in motion blur and other artifacts
- Anatomy cut-off – Not getting full coverage of the body part to be scanned
- Over/under exposure – Incorrect exposure settings leaving regions blown out or too dark for diagnosis
- Collimator errors – Due to system defects, failing tubes, etc.
The following pie chart shows a sampling of reject reasons from one of our customer sites:
The more rejections there are, the longer the exam takes and the more radiation dose the patient receives, which directly affects quality of care and patient satisfaction. Redundant scans also shorten the life of the x-ray tube. The data we have analyzed so far suggests that positioning error is, by far, the most common reason for rejected images. This points directly to human error, which can often be corrected with training and practice. But in order to target the right training to the right people, the hospital needs to analyze the volume of rejected images.
Unfortunately, most hospitals we have spoken with struggle to perform reject analysis, for the following reasons:
- The rejected images and their associated data never leave the machine, so any analysis requires manual data collection
- Even when automated data collection is provided by the OEM, the analysis is limited to that OEM alone, resulting in data silos
- Most tools provided by the OEM are not big data ready, so they tend to fall apart with large data volumes or many concurrent users
- These tools are not enterprise-ready, so they fail to integrate well into the organization's overall reporting needs
We took up this challenge and have now built an OEM-agnostic solution specifically for the DR modality, one that automatically collects reject data from each device and seamlessly integrates it into our big data warehouse, so that it can be viewed in the context of data from any of the other data sources such as PACS, EMR, RIS, CMMS, machine logs, RTLS, etc.
So far we have worked with Canon- and Fuji-based DR systems, with the goal of covering all the big players in the market. Combined with our dose analysis solution, hospitals can also ensure dose violations are accurately reported and addressed, regardless of whether the images were sent to the PACS. This allows the radiology department to aggregate its reject data across its entire fleet, across vendors, and across networks. At a glance, one can see the reject ratios by individual system, technologist, or exam type. This helps drive an enterprise-wide training program focused on making improvements where improvements are necessary.
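To make the aggregation concrete, here is a minimal sketch of how reject ratios can be computed per technologist (or per system or exam type) once the reject records have been collected in one place. The record fields and values are illustrative assumptions, not Glassbeam's actual schema:

```python
from collections import defaultdict

# Hypothetical reject records as they might be collected from DR systems.
# Field names ("technologist", "exam", "rejected") are illustrative only.
records = [
    {"technologist": "T01", "exam": "Chest PA", "rejected": False},
    {"technologist": "T01", "exam": "Chest PA", "rejected": True},
    {"technologist": "T01", "exam": "Knee AP", "rejected": False},
    {"technologist": "T01", "exam": "Knee AP", "rejected": False},
    {"technologist": "T02", "exam": "Chest PA", "rejected": True},
    {"technologist": "T02", "exam": "Chest PA", "rejected": False},
]

def reject_ratios(records, key):
    """Reject ratio (rejected exposures / total exposures) grouped by `key`."""
    totals = defaultdict(int)
    rejects = defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        if r["rejected"]:
            rejects[r[key]] += 1
    return {k: rejects[k] / totals[k] for k in totals}

print(reject_ratios(records, "technologist"))  # e.g. {'T01': 0.25, 'T02': 0.5}
```

The same function can be pointed at any grouping field, which is what makes a cross-vendor, fleet-wide view possible once the data is out of the individual machines.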
I suggest the following resources for further reading. These pieces complement this blog and help you dig deeper into our positioning in the healthcare market. Give them a read. Also, do reach out to me at firstname.lastname@example.org or my peers at sales@glassbeam if you would like a demo of Clinsights.