Imagine you’re trying to tell apart three very similar-looking leaves to figure out which one might be from a plant that’s not healthy, but the pictures you have to work with are a bit blurry.
This is similar to the challenge doctors face when they use ultrasound images to diagnose breast cancer and need to sort tumors into three specific groups that indicate the risk level. The groups look quite similar, which makes them tough to tell apart.
To help with this, researchers have created a new tool called IDC-Net, an AI-driven computer program that combines the best parts of two existing technologies to analyze these images more accurately.
It’s like giving the doctors a super-powered magnifying glass that not only clears up the blur but also highlights the tiny, crucial details that tell the groups apart, all without needing a supercomputer to do it.
A recent study put the tool to the test, focusing on improving the diagnosis of breast cancer from ultrasound images, particularly on categorizing breast lesions into three specific sub-categories (4a, 4b, and 4c) defined by the BI-RADS system.
These categories help doctors understand the risk level of breast cancer more precisely, but distinguishing between them is challenging due to the low resolution of ultrasound images and the similarity between the categories.
To address this, researchers developed a new type of neural network called IDC-Net. This network combines the strengths of two existing technologies: convolutional neural networks (CNNs) and Capsule Networks (CapsNet). CNNs are good at extracting detailed local information from images, while CapsNet excels at understanding the spatial relationships within an image, such as the position and orientation of objects.
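The paper's exact layer configuration isn't given in this summary, but the general recipe — a CNN that extracts local features, feeding a capsule head that models how those features are arranged — can be sketched in a few lines of PyTorch. Everything below (the layer sizes, the 64×64 grayscale input, three routing iterations) is an illustrative assumption, not IDC-Net itself:

```python
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    # CapsNet nonlinearity: shrinks a vector's length into (0, 1) but keeps its
    # direction, so length can be read as "probability the entity is present".
    sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / (sq.sqrt() + eps)

class CNNCapsHybrid(nn.Module):
    # CNN backbone for local detail + capsule head for global arrangement (pose).
    def __init__(self, num_classes=3, caps_in=8, caps_out=16, num_primary=32 * 8 * 8):
        super().__init__()
        self.backbone = nn.Sequential(                  # local feature extraction (the CNN role)
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # primary capsules: 32 capsule types, each an 8-D vector at every spatial cell
        self.primary = nn.Conv2d(64, 32 * caps_in, 3, stride=2, padding=1)
        self.caps_in, self.num_classes = caps_in, num_classes
        # one learned pose transform per (primary capsule, class) pair
        self.W = nn.Parameter(0.01 * torch.randn(1, num_primary, num_classes, caps_out, caps_in))

    def forward(self, x, routing_iters=3):
        f = self.primary(self.backbone(x))                        # (B, 256, 8, 8) for 64x64 input
        B, _, H, Wd = f.shape
        u = f.view(B, 32, self.caps_in, H * Wd).permute(0, 1, 3, 2).reshape(B, -1, self.caps_in)
        u = squash(u)                                             # (B, N, 8) primary capsules
        u_hat = (self.W @ u[:, :, None, :, None]).squeeze(-1)     # (B, N, C, 16) class predictions
        b = torch.zeros(B, u.size(1), self.num_classes, device=x.device)
        for _ in range(routing_iters):                            # routing-by-agreement
            c = b.softmax(dim=2).unsqueeze(-1)                    # coupling coefficients
            v = squash((c * u_hat).sum(dim=1))                    # (B, C, 16) class capsules
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)              # reward agreeing predictions
        return v.norm(dim=-1)                                     # capsule length = class score

scores = CNNCapsHybrid()(torch.randn(2, 1, 64, 64))  # (2, 3): one score per BI-RADS 4 subtype
```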
IDC-Net includes a section called ID-Net, built on the CNN architecture, which uses specific building blocks to make the network both deep (for detailed analysis) and wide (for broader analysis), ensuring it can extract rich information from images without becoming too complex or heavy.
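This summary doesn't spell out those building blocks, but an Inception-style block is a standard way to get "wide" (parallel branches with different receptive fields looking at the same input) while stacking blocks gives "deep". The sketch below illustrates that pattern as an assumption, not IDC-Net's actual block:

```python
import torch
import torch.nn as nn

class DeepWideBlock(nn.Module):
    # "Wide": parallel branches with different receptive fields see the same input;
    # "deep": several such blocks are stacked. Inception-style, an illustrative
    # guess at the kind of building block described, not IDC-Net's actual design.
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)          # pointwise view
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),
                                nn.Conv2d(branch_ch, branch_ch, 3, padding=1))  # mid-range view
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1),
                                nn.Conv2d(branch_ch, branch_ch, 5, padding=2))  # broader view
        self.bn = nn.BatchNorm2d(3 * branch_ch)

    def forward(self, x):
        out = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)  # fuse branches channel-wise
        return torch.relu(self.bn(out))

# Depth comes from stacking; each block stays cheap because its branches are narrow.
net = nn.Sequential(DeepWideBlock(1, 16), DeepWideBlock(48, 16), DeepWideBlock(48, 32))
print(net(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 96, 64, 64])
```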
Another part of IDC-Net uses CapsNet to complement the CNN’s capabilities by focusing on the global features of the image, like position and pose, which CNNs might overlook.
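A concrete way to see what CNN pipelines can overlook: the global pooling that typically ends a CNN records which features fired but not where. The toy check below (made-up tensors, not real features) shows two maps with the same activation in different places pooling to identical vectors, whereas reading the maps position-by-position, as a capsule head does, keeps the arrangements distinct:

```python
import torch

# Two feature maps: the same feature fires, but in different places.
a = torch.zeros(1, 4, 6, 6); a[0, :, 1, 1] = 1.0   # feature near the top-left
b = torch.zeros(1, 4, 6, 6); b[0, :, 4, 4] = 1.0   # same feature near the bottom-right

# Global max pooling (a common CNN head) cannot tell them apart:
# "what" survives, "where" does not.
print(torch.equal(a.amax(dim=(2, 3)), b.amax(dim=(2, 3))))   # True

# Kept as per-location vectors (the capsule view), the arrangements stay distinct.
print(torch.equal(a.flatten(1), b.flatten(1)))               # False
```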
The network is designed to be lightweight, meaning it can perform these tasks without requiring excessive computational resources, making it practical for real-world applications.
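"Lightweight" is usually quantified by trainable-parameter count (and FLOPs). A quick sanity check in PyTorch, using ResNet-50 purely as a familiar heavy reference point since the paper's actual numbers aren't given in this summary:

```python
import torch.nn as nn
from torchvision.models import resnet50

def count_params(m: nn.Module) -> int:
    # Total trainable parameters: the usual first proxy for how "heavy" a model is.
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

print(f"ResNet-50: {count_params(resnet50()) / 1e6:.1f}M params")  # ~25.6M, a typical large backbone
```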
The effectiveness of IDC-Net was tested on a dataset from the Yunnan Cancer Hospital and two public datasets, where it classified breast ultrasound images into the BI-RADS 4a, 4b, and 4c categories more accurately than five other existing methods, achieving 98.54% in accuracy, precision, and F1 score alike.
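For reference, these are the standard multi-class evaluation metrics; how they would be computed with scikit-learn is sketched below. The labels are invented, and whether the study uses macro or weighted averaging for precision and F1 is an assumption here:

```python
from sklearn.metrics import accuracy_score, precision_score, f1_score

# Invented predictions for a 3-class problem (0 = BI-RADS 4a, 1 = 4b, 2 = 4c).
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2, 2]

print(accuracy_score(y_true, y_pred))                    # fraction of lesions classified correctly
print(precision_score(y_true, y_pred, average="macro"))  # per-class precision, averaged
print(f1_score(y_true, y_pred, average="macro"))         # per-class precision/recall harmonic mean, averaged
```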
This advancement could significantly improve the accuracy of breast cancer diagnoses and reduce the psychological burden on patients by providing more precise assessments of their condition.