Articles in Press

Original Article(s)

  • XML | PDF | downloads: 120 | views: 227 | pages:

    Abstract

    Purpose: Magnetoencephalography is the recording of magnetic fields produced by the activity of brain neurons and provides a direct, non-invasive measurement of that activity. Despite its high spatial and temporal resolution, magnetoencephalography has a low-amplitude signal, so environmental noise drastically reduces the signal-to-noise ratio. Therefore, signal reconstruction methods can be effective in recovering noisy and lost information.

    Materials and Methods: The magnetoencephalography signals of 11 healthy young subjects were recorded in a resting state. Each recording contains data from 148 channels fixed on a helmet. The performance of three different reconstruction methods was investigated by using data from channels adjacent to the selected channel to interpolate its information. These three methods are surface reconstruction methods, partial differential equation algorithms, and finite element-based methods. Afterward, to evaluate the performance of each method, the R-square, root mean square error, and signal-to-noise ratio between the reconstructed and original signals were calculated. The relation between these criteria was checked through appropriate statistical tests at a significance level of 0.05.

    Results: The mean method, with a root mean square error of 0.016 ± 0.009 (mean ± SD), could reconstruct an epoch in the minimum time (3.5 microseconds). The median method, with a similar error but in 5.9 microseconds, could reconstruct an epoch with an R-square greater than 0.7 with a probability of 99.33%.

    Conclusion: The mean and median methods can reconstruct noisy or lost magnetoencephalography signals with a suitable degree of similarity to the reference by using the signals of channels adjacent to the damaged sensor.
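    As an illustration of the interpolation idea above, the sketch below reconstructs a damaged channel as the mean or median of its neighbors and scores the result with the same three criteria (RMSE, R-square, SNR). All signals here are synthetic; this is not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
reference = np.sin(2 * np.pi * 10 * t)          # "true" signal of the damaged channel
# Simulated adjacent channels: the same underlying activity plus sensor noise
neighbors = reference + 0.05 * rng.standard_normal((4, t.size))

# Mean and median reconstruction from the adjacent channels
recon_mean = neighbors.mean(axis=0)
recon_median = np.median(neighbors, axis=0)

def rmse(x, y):
    return np.sqrt(np.mean((x - y) ** 2))

def r_square(x, y):
    ss_res = np.sum((x - y) ** 2)
    ss_tot = np.sum((x - x.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def snr_db(x, y):
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))

for name, recon in [("mean", recon_mean), ("median", recon_median)]:
    print(name, rmse(reference, recon), r_square(reference, recon), snr_db(reference, recon))
```

    With four moderately noisy neighbors, averaging alone already pushes R-square well above the 0.7 threshold quoted in the results.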

  • XML | PDF | downloads: 255 | views: 210 | pages:

    Purpose: There is a growing interest in the clinical application of new PET radiopharmaceuticals. This study focuses on using 64Cu-DOTA-Trastuzumab for Positron Emission Tomography–Computed Tomography (PET/CT) imaging in gastric cancer patients. It aims to enhance the understanding of its bio-kinetic distribution and absorbed dose for safe and practical application in nuclear medicine.

    Materials and Methods: The study was conducted at the Agricultural, Medical, and Industrial Research School (AMIRS), where 64Cu was produced and purified. The radiopharmaceutical 64Cu-DOTA-Trastuzumab was prepared, and three patients with confirmed Human Epidermal growth factor Receptor 2 (HER2)-positive gastric cancer underwent PET/CT scans at 1, 12, and 48 hours post-injection. Images were acquired using a Discovery IQ PET/CT system and analyzed for Standardized Uptake Values (SUVs). Bio-distribution was modeled using a two-exponential function, and absorbed doses were calculated using IDAC-Dose 2.1 software. CT doses were also evaluated.

    Results: The study found that post-injection imaging at 12 hours or more provided superior image quality. The liver exhibited the highest cumulative activity, followed by the spleen and other organs. The effective dose estimates for 64Cu-DOTA-Trastuzumab were within acceptable limits. CT dose calculations revealed that sensitive organs received higher doses.

    Conclusion: This study successfully assessed the bio-kinetic distribution and absorbed dose of 64Cu-DOTA-Trastuzumab in gastric cancer patients, demonstrating its safety and potential for clinical use. The optimal timing for PET/CT imaging and dosimetry data can inform clinical decision-making. Further research is warranted to explore the therapeutic potential of 64Cu-DOTA-Trastuzumab and to establish clinical guidelines for its use.
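    The cumulated activity that drives absorbed-dose calculations is the time integral of the fitted two-exponential function. The sketch below uses hypothetical coefficients (not the study's fitted values) to show the closed-form integral against a numerical check.

```python
import numpy as np

# Hypothetical two-exponential fit A(t) = A1*exp(-l1*t) + A2*exp(-l2*t), t in hours.
# Coefficients are illustrative and not taken from the study.
A1, l1 = 40.0, 0.30   # fast-clearing component (MBq, 1/h)
A2, l2 = 10.0, 0.02   # slow component (MBq, 1/h)

def activity(t):
    return A1 * np.exp(-l1 * t) + A2 * np.exp(-l2 * t)

# Cumulated activity = integral of A(t) from 0 to infinity (MBq*h), closed form:
analytic = A1 / l1 + A2 / l2

# Numerical check with the trapezoidal rule over a long window
t = np.linspace(0.0, 500.0, 200001)
y = activity(t)
numeric = float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

print(analytic, numeric)
```

    The closed form explains why late imaging time points matter: the slow component dominates the integral even when its amplitude is small.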

  • XML | PDF | downloads: 135 | views: 216 | pages:

    Purpose: The present study aimed to evaluate the effectiveness of incorporating nanohydroxyapatite into hydrogen peroxide bleaching material on the color, microhardness, and morphological features of dental enamel.

    Materials and Methods: Thirty-three sound maxillary first premolars were used for the study. Enamel blocks (7 mm × 5 mm × 3 mm) were prepared from the middle third of the buccal half of each tooth. Each dental block was embedded in self-curing acrylic resin with the exterior enamel surface exposed for the various applications. The dental blocks were randomly divided into three groups (n=11) according to the bleaching technique. The groups were designated as follows: control, hydrogen peroxide (HP), and hydrogen peroxide with nanohydroxyapatite (HP-nHAp). Color measurements and microhardness tests were conducted before and after treatment. One sample representing each group was selected for morphological analysis.

    Results: The results showed that both the HP and HP-nHAp groups induced color change. The enamel microhardness loss of the HP group was significantly higher than that of the HP-nHAp and control groups. Enamel morphological changes were only observed in the HP group.

    Conclusion: nHAp could significantly reduce the enamel microhardness loss caused by HP while preserving enamel surface morphological features without affecting bleaching efficacy.

  • XML | PDF | downloads: 76 | views: 192 | pages:

    Purpose: Diabetes, resulting from insufficient insulin production or utilization, causes extensive harm to the body. Conventional diagnostic methods are often invasive. The classification of diabetes is essential for effective management. Progress in research and technology has led to additional classification approaches. Machine Learning (ML) algorithms have been deployed to analyze large datasets and classify diabetes.

    Materials and Methods: The classification and regression of diabetic and non-diabetic individuals are performed using the XGBoost mechanism. Building on this, the proposed class-centric Focal XGBoost is applied to improve model performance by measuring the similarity among the features. The predictions of the model, based on the classification and regression of diabetic and non-diabetic individuals, are evaluated using appropriate metrics to estimate its performance.

    The dataset used in the Class-Centric Focal XGBoost model is acquired using an Arduino Uno kit at a sampling rate of 100 Hz. The data were gathered from Bharati Hospital Pathology Laboratories, located in Pune.

    Results: The overall outcomes of the proposed model, together with the corresponding Exploratory Data Analysis (EDA) for both classification and regression on the collected dataset, are presented.

    Conclusion: The proposed Class-Centric Focal XGBoost model has numerous advantages and is less sensitive to hyperparameters than the conventional XGBoost algorithm. As a real-time application, the Class-Centric Focal XGBoost model can be utilized for the classification and detection of other communicable and non-communicable diseases.
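    The abstract does not specify how the class-centric focal mechanism is constructed; as a hedged illustration, the standard binary focal loss below shows the general idea of down-weighting easy examples so the booster concentrates on hard ones. All numbers are illustrative.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Standard binary focal loss FL = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive (diabetic) class
    y: true label in {0, 1}
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes far less loss than a wrong one,
# which is the mechanism that focuses training on hard cases.
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.10]), np.array([1]))[0]
print(easy, hard)
```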

  • XML | PDF | downloads: 108 | views: 189 | pages:

    Purpose: People with Down Syndrome require special care because they have an intellectual disability with abnormalities in memory and learning, so creating a model for Down Syndrome (DS) recognition may help provide safe services to them. The transfer learning technique can achieve high metrics with a small dataset by relying on previous knowledge. Since no Down syndrome dataset is available for training, a new one was created.

    Materials and Methods: A new dataset was created by gathering images in two classes (Down = 209 images, non-Down = 214 images), and this dataset was then expanded using augmentation to a final total of 892 images (Down = 415 images, non-Down = 477 images). Finally, a suitable training model was applied; in this work, the Xception and ResNet models are used, with pretrained weights trained on the ImageNet dataset, which consists of 1000 classes.

    Results: Using the ResNet model, the accuracy was 95.93% and the loss was 0.16, while using the Xception model, the accuracy was 96.57% and the loss was 0.12.

    Conclusion: Transfer learning is used to overcome the limitation of dataset size and to minimize training cost and processing time. The accuracy and loss are good when using the Xception model; in addition, the Xception metrics are the best in comparison with previous studies.

  • XML | PDF | downloads: 151 | views: 276 | pages:

    Purpose: In this study, we propose a novel generalizable hybrid underlying mechanism for mapping Human Pose Estimation (HPE) data to muscle synergy patterns, which can be highly efficient in improving visual biofeedback.

    Materials and Methods: In the first step, Electromyography (EMG) data from the upper limb muscles of twelve healthy participants are collected and pre-processed, and muscle synergy patterns are extracted from them. Concurrently, kinematic data are detected using the OpenPose model. After synchronization and normalization, the Successive Variational Mode Decomposition (SVMD) algorithm decomposes the synergy control patterns into smaller components. To establish the mappings, a custom Bidirectional Gated Recurrent Unit (BiGRU) model is employed and compared against popular alternative models.

    Results: The results show that the trajectory generated by the model is potentially suitable for visual biofeedback systems, and the combined SVMD-BiGRU model outperforms the alternatives. Furthermore, empirical assessments demonstrated that healthy participants could closely follow the trajectory generated by the model output during the test phase.

    Conclusion: Ultimately, the incorporation of this innovative mechanism at the heart of visual biofeedback systems has been revealed to significantly elevate both the quantity and quality of movement.

  • XML | PDF | downloads: 163 | views: 315 | pages:

    Purpose: Integrating magnetic Nanoparticles (NPs) into contrast-enhanced Magnetic Resonance (MR) imaging can significantly improve the resolution and sensitivity of the resulting images, leading to enhanced accuracy and reliability in diagnostic information. The present study aimed to investigate the use of targeted trastuzumab-labeled iron oxide (TZ-PEG-Fe3O4) NPs to enhance imaging capabilities for the detection and characterization of Breast Cancer (BC) cells.

    Materials and Methods: The NPs were synthesized by loading Fe3O4 NPs with the monoclonal antibody TZ. Initially, Fe3O4 NPs were produced and subsequently coated with Polyethylene Glycol (PEG) to form PEG-Fe3O4 NPs. The TZ antibody was then conjugated to the PEG-Fe3O4 NPs, resulting in TZ-PEG-Fe3O4 NPs. The resulting NPs were characterized using standard analytical techniques, including UV-Vis spectroscopy, FTIR, SEM, TEM, VSM, and assessments of colloidal stability.

    Results: Analyses indicated that the targeted TZ-PEG-Fe3O4 NPs exhibited a spherical morphology and a relatively uniform size distribution, with an average diameter of approximately 60 nm. These results confirmed the successful synthesis and controlled fabrication of the Fe3O4 NPs, which is crucial for developing effective Contrast Agents (CAs) for medical imaging applications. Additionally, the study confirmed the biocompatibility and magnetic properties of the synthesized TZ-PEG-Fe3O4 NPs.

    Conclusion: The findings suggest that the developed targeted TZ-PEG-Fe3O4 NPs have significant potential as effective CAs for MR imaging of BC cells.

  • XML | PDF | downloads: 134 | views: 116 | pages:

    Background: The prevalence of coronavirus has increased the use of CT scans, a high-exposure imaging technique. This study was designed to estimate organ doses and effective doses in order to investigate the lifetime attributable risks (LARs) of cancer incidence and mortality in COVID-19 patients. A total of 600 patients who had or were suspected of having COVID-19 were included in the current study.

    Methods: Dosimetric parameters such as the dose-length product (DLP), volumetric CT dose index (CTDIvol), and scan length were used to estimate patient dose and cancer risk. The ImPACT CT dosimetry software was also used to calculate organ doses and effective doses. The cancer risk was calculated using the National Academy of Sciences' Biologic Effects of Ionizing Radiation (BEIR VII) report.

    Results: For females, the mean effective dose based on International Commission Radiation Protection 103 (ICRP103) and ICRP 60 was 2.36 ± 0.48 mSv and 1.2 ± 0.28 mSv, respectively. For males, this parameter was 2.31 ± 0.53 mSv and 1.21 ± 0.45 mSv based on ICRP103 and ICRP60, respectively. For males, the mean LAR of all cancer incidence and cancer mortality was 14.79 ± 4.85 and 8.59 ± 2.42 per 100000 people, respectively. For females, these parameters were 23.37 ± 9.59 and 12.61± 3.89 per 100000 people, respectively.

    Conclusion: Chest CT examination is associated with a modest radiation dose and cancer risk. Nevertheless, according to the ALARA principle, CT protocols must be optimized to limit radiation-induced risk.
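    A common shorthand for the dose quantities above is the DLP-based effective dose estimate E = k × DLP, with k ≈ 0.014 mSv/(mGy·cm) often quoted for adult chest CT. The arithmetic below uses illustrative values, not the study's data.

```python
# Illustrative chest-CT dose arithmetic (example values, not the study's data).
dlp = 170.0        # dose-length product, mGy*cm
k_chest = 0.014    # DLP-to-effective-dose coefficient for chest, mSv/(mGy*cm)

effective_dose = k_chest * dlp   # mSv; same order as the ~2.3 mSv reported above
print(effective_dose)

# Converting a lifetime attributable risk quoted per 100,000 people to one patient
lar_per_100k = 14.79
risk_per_patient = lar_per_100k / 100_000
print(risk_per_patient)
```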

  • XML | PDF | downloads: 112 | views: 189 | pages:

    Purpose: The process of making a decision based on available sensory information is called "Perceptual Decision Making". The manner in which this decision is made has a direct impact on a person's social and personal relationships. Despite numerous studies in the field of perceptual decision making, there is still no robust system that can recognize people's perceptual decisions objectively. To this end, this study examines the relationship between EEG signals and perceptual decision making in healthy individuals.

    Materials and Methods: The research employs an online EEG dataset based on visual stimuli, including faces and cars, obtained from 16 participants. The brain has no strictly binary decision-making mode; rather, each option carries a particular weight, and the option that passes a threshold is ultimately selected. This research therefore incorporates this uncertainty into the final model to improve the performance of the perceptual decision recognition system. For this purpose, a fuzzy radial basis function (FRBF) network was utilized.

    Results: After extracting 26 features from the preprocessed EEG signals, Friedman's non-parametric statistical analysis was performed, revealing that differences in the coherence of stimulus representations have a greater impact on an individual's decision-making process than spatial prioritization. Then, the FRBF network classifier, with the features extracted from the TP9 and TP10 channels as input, achieved an accuracy of 90.3% in classifying the test data as either a "face" or a "car".

    Conclusion: The classification accuracy results showed that the proposed method is an effective procedure for recognition of human decisions.

  • XML | PDF | downloads: 141 | views: 199 | pages:

    Purpose: Phantom-less patient-specific quality assurance (PSQA) for intensity-modulated radiotherapy (IMRT) plan verification has been exploited recently. The aim of this study was to assess the feasibility of PSQA based on a log file and the onboard detector for prostate patients in helical tomotherapy.

    Method: For 15 prostate patients, the quality assurance (QA) of the helical tomotherapy plan was performed using the Delta4 phantom and Cheese phantom to evaluate the spatial dose distribution and point dose, respectively. These parameters were also reconstructed by delivery analysis (DA) software using the measured leaf open times (LOTs). The gamma analysis and relative dose difference were used to compare the measured and reconstructed dose with the calculated values. Then, using the relative discrepancy, the log file and onboard detector data were compared with the expected data to assess machine performance.

    Results: The mean relative dose difference was within 1.3% among the measurement, reconstruction, and calculation. The statistical analysis and p-values showed no statistically significant difference in the dose difference between the DA-based and conventional QA methods. The gamma values at 3%/3mm, 3%/2mm, 2%/3mm, 2%/2mm, 2%/1mm, and 1%/1mm for the DA-based QA method were the same as for the measurement-based QA method, while the gamma values at 3%/1mm, 1%/3mm, and 1%/2mm were comparable. The mean percentage difference in LOTs was 0.07%, and most differences occurred in very low and some high LOTs. The relative difference between the log file and expected data was lower than 2.30% for the couch speed, couch movement, monitor units, and gantry rotations per minute (RPM).

    Conclusion: The DA software is an efficient alternative to the measurement-based PSQA method. However, the accuracy of the DA software requires further investigation for gamma analysis at strict criteria. Very low and high LOTs may lead to dose discrepancies. The tomotherapy machine can accurately implement the planned parameters.

  • XML | PDF | downloads: 158 | views: 202 | pages:

    Purpose: To evaluate the impact of various surface treatments on the bond strength between resin cement and zirconia surfaces.

    Material and methods: Using an STL file, 60 monolithic zirconia discs (Vita YZ HT) with dimensions of 10 millimeters in diameter and 2 millimeters in height were produced. They were machined and sintered, and the surface was smoothed using 600, 800, and 1200 grit aluminum oxide paper. Four groups were created based on the surface treatment applied to the discs: no treatment (control), sandblasting, potassium hydrogen difluoride, and Zircos-E solution. Resin cement cylinders (Panavia V5; Kuraray Noritake) were applied to the zirconia discs using a custom mold. The shear bond strength was assessed after thermocycling. Scanning electron microscopy (SEM) was utilised to analyse the morphological alterations of one specimen from every group. A two-way ANOVA and a post-hoc Tukey's test (P < 0.05) were used to statistically analyse the data.

    Results: The data analysis showed that the maximum shear bond strength values, measured at 128.933 ± 2.764 MPa, were obtained via airborne particle abrasion with 50-µm Al2O3. The values obtained by the control group were the lowest, at 50.933 ± 9.573 MPa. The use of 50-µm Al2O3 in airborne particle abrasion caused a significant increase in shear bond strength values (p<0.05).

    Conclusion: The adhesive strength between zirconia and resin cement was improved by surface treatments, and airborne particle abrasion with 50-µm Al2O3 was shown to be an effective way to increase bond strength.

  • XML | PDF | downloads: 124 | views: 188 | pages:

    Purpose: Dental caries can emerge anywhere in the mouth, particularly in the interior of the cheeks and the gums. Some of the indications are patches on the inner lining of the mouth, along with bleeding, toothache, numbness, and unusual red and white staining. Hence, it is important to predict the presence of a cavity at an early stage. The currently available manual method is inefficient, and hence we provide an advanced method using deep learning concepts.

    Materials and Methods: In this work, different types of algorithms, such as ResNet, Deeper GoogleNet, and MiniVGGNet, are used to predict the class of cavity at an early stage.

    Results: A comparison of the accuracy of the three different algorithms is given in this paper. Thus, by using efficient deep learning algorithms, it is possible to predict the presence and the class of a cavity at an early stage and take the necessary steps to overcome it.

    Conclusion: In this work, a comparison between the three different algorithms is given, and it is shown that the most efficient among them is the Inception-based (GoogleNet) algorithm, which achieves an accuracy of about 98%, suitable for use in hospitals.

  • XML | PDF | downloads: 153 | views: 132 | pages:

    Abstract

    Purpose: The dose from Computed Tomography (CT) scan exams accounts for a large proportion of the dose burden of all medical imaging modalities. There are different methods to measure and describe radiation dose in CT. A standardized way is to measure the Computed Tomography Dose Index (CTDI). However, due to the increase in detector system size along the z-axis in new generations of CT scanners, new measurement methods are described in the American Association of Physicists in Medicine Task Group No. 111 (AAPM TG-111) report. This study aims to estimate the equilibrium dose and compare it with the volume computed tomography dose index (CTDIvol) displayed at the end of each exam. Finally, the effective dose was calculated for both methods.

    Material and Methods: Using a standard polymethylmethacrylate (PMMA) phantom and a pencil ionization chamber, the values of CTDI100, CTDIvol, cumulative dose, equilibrium dose, and effective dose were calculated.

    Results: Six protocols were performed in two centers. Measurements with a standard CT dosimetry phantom showed that the average equilibrium dose differed from CTDIvol, with discrepancies ranging from 26% to 35%.

    Conclusion: The CTDIvol is not suitable for evaluating the radiation dose at the end of each scan, and the use of the equilibrium dose for the dosimetry of new systems is recommended.

    Keywords: Multidetector computed tomography, Equilibrium dose, Computed tomography volume dose index, AAPM-TG 111, Radiation dosimetry
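    For reference, the conventional CTDI bookkeeping that TG-111 revisits can be sketched as follows (illustrative numbers, not the study's measurements):

```python
# Standard CTDI relations (illustrative numbers, not the study's measurements).
ctdi100_center = 10.0      # mGy, measured at the phantom center
ctdi100_periphery = 14.0   # mGy, mean of the peripheral positions
pitch = 1.0
scan_length = 30.0         # cm

# Weighted CTDI combines the center and periphery measurements 1:2
ctdi_w = (1.0 / 3.0) * ctdi100_center + (2.0 / 3.0) * ctdi100_periphery
ctdi_vol = ctdi_w / pitch            # mGy, the value displayed on the console
dlp = ctdi_vol * scan_length         # mGy*cm

print(ctdi_w, ctdi_vol, dlp)
```

    The study's point is that for wide detectors the 100 mm integration length behind CTDI100 undercollects scatter, which is why the equilibrium dose can differ from CTDIvol by the 26-35% reported above.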

  • XML | PDF | downloads: 79 | views: 291 | pages:

    Background: For whole-body (WB) kinetic modeling based on a typical positron emission tomography (PET) scanner, a multipass multibed scanning protocol is necessary because of the limited axial field of view. Such a protocol introduces loss of early dynamics of the time-activity curve (TAC) and sparsity in TAC measurements, inducing uncertainty in parameter estimation when using prevalent least squares estimation (LSE) (i.e., common standard) especially for kinetic microparameters.

    Purpose: We developed and investigated a method to estimate microparameters enabling parametric imaging, by focusing on general image qualities, overall visibility, and tumor detectability, beyond the common standard framework for fitting of data and parameter estimation.

    Methods: Our parameter estimation method, denoted parameter combination-driven estimation (PCDE), has two distinctive characteristics: 1) improved probability of having one-on-one mapping between early and late dynamics in TACs (the former missing from typical protocols) at the cost of the precision of the estimated parameter, and 2) utilization of multiple aspects of TAC in selection of best fits. To compare the general image quality of the two methods, we plotted tradeoff curves for the normalized bias (NBias) and the normalized standard deviation (NSD). We also evaluated the impact of different iteration numbers of the ordered-subset expectation maximization (OSEM) reconstruction algorithm on the tradeoff curves. In addition, for overall visibility, a measure of the ability to identify suspicious lesions in WB (i.e., global inspection), the overall signal-to-noise ratio (SNR) and spatial noise (NSDspatial) were calculated and compared. Furthermore, the contrast-to-noise ratio (CNR) and relative error of the tumor-to-background ratio (RETBR) were calculated to compare tumor detectability within a specific organ (i.e., local inspection). Finally, we implemented and tested the proposed method on patient datasets to further verify clinical applicability.

    Results: With five OSEM iterations, improved general image quality was verified in microparametric images (i.e., reduction in overall NRMSE: 57.5, 71.1, and 56.1 [%] in the K1, k2, and k3 images, respectively). The overall visibility and tumor detectability were also improved in the microparametric images (i.e., increase in overall SNR: 0.2, 4.1, and 2.4; decrease in overall NSDspatial: 0.2, 5.4, and 4.1; decrease in RETBR for a lung tumor: 17.5, 82.2, and 68.4 [%]; decrease in RETBR for a liver tumor: 255.8, 1733.5, and 80.3 [%], in K1, k2, and k3 images, respectively; increase in CNR for a lung tumor: 1.3 and 1.0; increase in CNR for a liver tumor: 1.2 and 9.8, in K1 and k3 images, respectively). In addition, with five OSEM iterations, the differences in macroparametric images of the two methods were insignificant (i.e., overall NRMSE difference was within 10 [%]; differences in overall SNR, overall NSDspatial, and CNRs for both tumors were within 1.0; and the difference in RETBR was within 10 [%] except for an exceptional case). For the patient study, improved overall visibility and tumor detectability were demonstrated in microparametric images.

    Conclusions: The proposed method provides improved microkinetic parametric images compared to common standard in terms of general image quality, overall visibility, and tumor detectability.
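    The kinetic microparameters above (K1, k2, k3) come from compartment modeling of TACs. As a simplified stand-in for the two-tissue model those parameters imply, the sketch below simulates a one-tissue TAC as the convolution of a hypothetical plasma input with an exponential impulse response; all values are illustrative.

```python
import numpy as np

# One-tissue compartment model: C_T(t) = K1 * integral of Cp(u)*exp(-k2*(t-u)) du.
# A simplified stand-in for the two-tissue model implied by K1, k2, k3 above.
dt = 0.1                              # time step, minutes
t = np.arange(0.0, 60.0, dt)
Cp = t * np.exp(-t / 2.0)             # hypothetical plasma input function (a.u.)
K1, k2 = 0.5, 0.1                     # illustrative rate constants (1/min)

impulse = np.exp(-k2 * t)             # tissue impulse response
Ct = K1 * np.convolve(Cp, impulse)[: t.size] * dt   # discrete convolution integral

print(Ct.max())
```

    A multipass multibed protocol samples this curve sparsely and misses its early rise, which is exactly the uncertainty in K1 and k2 that the PCDE method is designed to mitigate.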

  • XML | PDF | downloads: 105 | views: 161 | pages:

    Purpose: This study focuses on a comprehensive evaluation of various diffusion tensor imaging (DTI) estimation methods: linear least squares (LLS), weighted linear least squares (WLLS), iteratively re-weighted linear least squares (IRLLS), and non-linear least squares (NLS). The article explores how each method performs in terms of accuracy and efficiency in estimating the diffusion tensor, and robustness against noise.

    Materials and Methods: The study compares the methods using simulated diffusion-weighted MRI data. Time complexity and performance were evaluated across key metrics such as TRMSE, RMSE, MSD, and ΔSNR.

    Results: The results of the study demonstrate that LLS and IRLLS consistently outperform other methods in terms of TRMSE, MSD and SNR, particularly in high-noise scenarios. NLS performs best in reducing RMSE but high noise causes it to fit to noise, so it is not robust. WLLS showed the weakest performance across all metrics.

    Conclusion: LLS and IRLLS provide a balance between accuracy and computational efficiency, making them practical for use in DTI analysis.

  • XML | PDF | downloads: 238 | views: 172 | pages:

    Purpose: One of the increasingly common neurological disorders is Alzheimer's disease, which progressively weakens brain cells and leads to critical cerebral impairments such as memory loss. Present diagnostic techniques comprise PET scans, MRI scans, CSF biomarkers, and others that frequently require manual effort and time-consuming processes and might not offer accurate results. This emphasizes the need for more precise and practical diagnostic solutions.

    Materials and Methods: The proposed model utilizes AI-based Deep Learning (DL) techniques for effective multi-class classification of AD stages, namely Early Mild Cognitive Impairment (EMCI), Late Mild Cognitive Impairment (LMCI), Mild Cognitive Impairment (MCI), Cognitively Normal (CN), and Alzheimer's Disease (AD), using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The proposed study utilizes a Tri-Branch Attention Network (TBAN) with Unified Component Incorporation (UCI) that captures both spatial and channel attention information by replacing the Squeeze-and-Excitation (SE) component in the conventional EfficientNet model, addressing the concerns associated with imbalanced spatial feature distribution in images. Further, incorporating the proposed TBAN module in the Conv layer helps not only to capture the long-term dependence between the different channels of the network but also to retain specific location information, enhancing the performance of the model. Similarly, the proposed UCI, used in the MBConv layer, deals with regularization: since model accuracy can drop due to unbalanced regularization, UCI provides strong regularization to combat overfitting and improve accuracy.

    Results: The proposed framework is evaluated with different metrics; the accuracy obtained by the proposed model is 0.95, and the precision, recall, and F1 score are likewise each 0.95.

    Conclusion: The proposed research addresses significant gaps in present diagnostic practice by applying emerging AI techniques to improve the efficacy and accuracy of Alzheimer's diagnosis from medical imaging. By enhancing early-detection capabilities, the proposed model has the potential to significantly affect treatment strategies for people with Alzheimer's, ultimately leading to better patient outcomes and quality of life.
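    For context, the conventional Squeeze-and-Excitation block that the proposed TBAN replaces can be sketched in a few lines of numpy; the weights here are random stand-ins for trained parameters, and the TBAN itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def squeeze_excitation(x, w1, w2):
    """Conventional SE block (the component the proposed TBAN replaces).

    x: feature map of shape (C, H, W). Squeeze: global average pool per
    channel; excitation: a reduction layer with ReLU, then an expansion
    layer with sigmoid; the gates rescale each channel.
    """
    z = x.mean(axis=(1, 2))                 # squeeze -> (C,)
    s = np.maximum(z @ w1, 0.0)             # reduction layer + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))     # expansion layer + sigmoid gates
    return x * s[:, None, None]             # channel-wise rescaling

C, r = 8, 2
x = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C, C // r)) * 0.1   # hypothetical trained weights
w2 = rng.standard_normal((C // r, C)) * 0.1

y = squeeze_excitation(x, w1, w2)
print(y.shape)
```

    Note how the pooling discards all spatial location information; retaining it alongside channel attention is precisely the gap the TBAN is described as filling.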

  • XML | PDF | downloads: 62 | views: 108 | pages:

    Purpose

    In epilepsy pre-surgical evaluations, semi-automated quantitative analysis of 18F-FDG brain PET images is a valuable adjunct to visual assessment for localizing seizure onset zones. This study investigates how adjusting image reconstruction parameters can enhance the accuracy of these quantitative results.

    Materials and Methods

    A total of 234 reconstruction parameters were applied to 18F-FDG brain PET images of a focal epilepsy patient. The parameters encompassed the 3D-Ordered-Subset Expectation Maximization image reconstruction method with resolution recovery (HD) and without (non-HD), various numbers of iterations and subsets (#it×sub), pixel sizes, and Gaussian filters. The accuracy errors were determined using the relative difference percentage (RD%) in measured SUVmax and the absolute Z-scores compared to reference values derived from the normal database reconstruction set serving as the benchmark.

    Results

    The study revealed that reconstructed images with 5 mm or 8 mm full width at half maximum (FWHM) Gaussian filters yielded RD% values above 5% for SUVmax and Z-scores, indicating potential inaccuracy with higher values of post-smoothing filters. The recommended reconstruction sets with RD% values below 5% for both HD and non-HD images were those with a 3 mm FWHM Gaussian filter and higher (#it×sub), specifically (5×21, 8×21), (5×21, 6×21), and (7×21, 8×21) for pixel sizes of 1.01 mm, 1.35 mm, and 2.03 mm, respectively.

    Conclusions

    The findings underscore the significant impact of altering the image reconstruction sets on the SUVmax and Z-scores. Furthermore, the inconsistent fluctuations of Z-scores emphasize the importance of using standard image reconstruction sets to ensure accurate and reliable quantitative outcomes in epilepsy pre-surgical evaluations.
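    The two accuracy criteria used above reduce to one-line formulas; the numbers below are illustrative, not the study's values.

```python
def rd_percent(measured, reference):
    """Relative difference percentage of a measurement against its reference."""
    return 100.0 * (measured - reference) / reference

def z_score(value, normal_mean, normal_sd):
    """Z-score of a value against a normal-database mean and SD."""
    return (value - normal_mean) / normal_sd

# Illustrative SUVmax comparison (example numbers only)
suv_ref, suv = 6.0, 6.4
print(rd_percent(suv, suv_ref))       # above the 5% tolerance used in the study
print(z_score(suv, 6.0, 0.5))
```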

  • XML | PDF | downloads: 66 | views: 74 | pages:

    Purpose: Posterior oblique beams are increasingly common in radiotherapy techniques. Radiation beams traversing the treatment couch are attenuated, causing under-dosage in the tumor region. The attenuation of an IGRT carbon fiber couch was investigated for different angles, energies, field sizes, measurement points, and couch regions, along with the ability of the Eclipse treatment planning system to predict dose.


    Materials and Methods: A Varian VitalBeam linear accelerator and Varian Exact IGRT couch top were used. At first, the couch coefficient was used to find the angle of greatest attenuation. Then, at that gantry angle, attenuation measurements were performed at three measurement points of an inhomogeneous thoracic phantom using a Farmer ionization chamber, for three energies and six field sizes in three regions of the IGRT couch.


    Results: In the three regions of the IGRT couch, the photon beam was most attenuated at a gantry angle of 130˚. The largest difference between calculated and measured point doses was 1.855%.


    Conclusion: At posterior oblique gantry angles, the IGRT treatment couch decreased the dose at the measurement points, with the magnitude depending on gantry angle, field size, energy, and couch region. The Eclipse treatment planning system can adequately predict the tumor dose distribution.
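    Couch attenuation of the kind measured here is conventionally quantified as the ratio of the dose recorded with the couch in the beam path to the open-beam dose. A minimal sketch of that bookkeeping (not the authors' code; all chamber readings and the angle set are hypothetical):

    ```python
    def attenuation_percent(dose_with_couch, dose_open):
        """Couch attenuation as a percentage of the open-beam dose."""
        return (1.0 - dose_with_couch / dose_open) * 100.0

    # Hypothetical ionization-chamber readings (with couch, open beam)
    # keyed by gantry angle in degrees
    readings = {100: (9.8, 10.0), 130: (9.5, 10.0), 160: (9.7, 10.0)}
    per_angle = {a: attenuation_percent(w, o) for a, (w, o) in readings.items()}
    # Angle of maximum attenuation drives the subsequent measurements
    worst_angle = max(per_angle, key=per_angle.get)
    ```

    With these illustrative readings, the 130˚ entry shows the largest attenuation, mirroring how the most-attenuating angle would be selected before the phantom measurements.
    
    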

  • XML | PDF | downloads: 65 | views: 75 | pages:

    Introduction

    Laryngeal cancer is a critical health issue, often treated using advanced radiation therapy techniques such as Intensity-Modulated Radiation Therapy (IMRT). The gamma index is a widely used metric for quality assurance in radiotherapy, assessing the agreement between planned and delivered dose distributions.

    Objective

    This study aims to evaluate the feasibility and accuracy of laryngeal IMRT treatment plans using three gamma analysis algorithms and varying evaluation parameters, including dose difference (DD%) and distance-to-agreement (DTA).


    Results

    Gamma passing rates (GPR) for the laryngeal IMRT plans demonstrated high accuracy, with over 90% of pixels passing the criteria in most cases. Composite gamma analysis showed 53.89% of pixels meeting both DD and DTA criteria simultaneously, while individual evaluation revealed the impact of stricter thresholds on GPR. Subtraction analysis identified dose discrepancies, emphasizing the need for accurate calibration.

    Conclusion
    This study highlights the effectiveness of gamma analysis in ensuring the accuracy of IMRT treatment plans for laryngeal cancer. The findings underscore the importance of rigorous PSQA, parameter optimization, and advanced algorithms to enhance treatment precision.
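    The gamma index behind this analysis combines the dose-difference (DD) and distance-to-agreement (DTA) criteria: a reference point passes when the minimum, over evaluated points, of sqrt((Δr/DTA)² + (Δd/DD)²) is at most 1. A simplified 1-D global-gamma sketch, not the study's algorithm or PSQA software:

    ```python
    import numpy as np

    def gamma_1d(dose_ref, dose_eval, spacing_mm, dd_pct=3.0, dta_mm=3.0):
        """Global 1-D gamma: for each reference point, minimize the combined
        dose-difference / distance-to-agreement metric over evaluated points."""
        dd = dd_pct / 100.0 * dose_ref.max()   # absolute dose criterion (global)
        x = np.arange(len(dose_ref)) * spacing_mm
        gammas = []
        for i, d in enumerate(dose_ref):
            dist = (x - x[i]) / dta_mm         # spatial term, scaled by DTA
            diff = (dose_eval - d) / dd        # dose term, scaled by DD
            gammas.append(np.sqrt(dist**2 + diff**2).min())
        return np.array(gammas)

    ref = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
    ev = ref.copy()                            # identical delivery -> gamma = 0
    gpr = (gamma_1d(ref, ev, spacing_mm=1.0) <= 1.0).mean() * 100
    ```

    The gamma passing rate (GPR) reported in abstracts like this one is the fraction of points with gamma ≤ 1; for identical distributions it is 100%.
    
    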

  • XML | PDF | downloads: 92 | views: 134 | pages:

    Purpose: In the era of digital medicine, medical imaging serves as a widespread technique for early disease detection, with a substantial volume of images generated and stored daily in electronic patient records. X-ray angiography is a standard and one of the most common methods for rapidly diagnosing coronary artery disease. Deep neural networks, leveraging abundant data, advanced algorithms, and powerful computational capabilities, are highly effective for image analysis and interpretation. In this context, object detection methods have become a promising approach.

    Materials and Methods: Deep learning-based object detection models, namely RetinaNet and EfficientDet D3, were utilized to precisely identify the location of coronary artery stenosis in X-ray angiography images. To this end, data from about one hundred patients with confirmed one-vessel coronary artery disease who underwent coronary angiography at the Research Institute for Complex Problems of Cardiovascular Diseases in Kemerovo, Russia, were used.

    Results: In the experiments, both models were able to detect the location of stenosis accurately. RetinaNet and EfficientDet D3 detected the location of stenotic segments in the coronary artery with a probability of more than 93%.

    Conclusion: It can be stated that our proposed model enables automatic and real-time detection of stenosis locations, assisting in the crucial and sensitive decision-making process for healthcare professionals.

  • XML | PDF | downloads: 76 | views: 75 | pages:

    Objective: In Morocco, significant disparities exist in adherence to national radiation protection standards, particularly in conventional radiology for pediatric patients. This cross-sectional study aims to establish Moroccan diagnostic reference levels (DRLs) for pediatric thorax radiography.

    Methods: Thorax radiography examinations of 208 pediatric patients (newborns up to 18 years old) from four public hospitals in Morocco were evaluated. Patient demographics (age, gender, weight) and scan parameters were recorded to calculate radiation doses using CALDOSE_X 5.0 software, quantifying entrance surface air kerma (ESAK) in mGy. The study sample was divided into five age groups (age < 1 month, 1 month ≤ age < 4 years, 4 years ≤ age < 10 years, 10 years ≤ age < 14 years, and 14 years ≤ age < 18 years). The third quartile (P75) of the calculated ESAK in mGy and of the kinetic energy released per unit mass (kerma)-area product (KAP) in mGy·cm² was analyzed for each group. Statistical analysis was performed using SPSS version 21.

    Results: The P75 values for ESAK (mGy) and KAP (mGy·cm²) diagnostic reference levels across the age groups were 0.61, 0.69, 0.68, 0.82, and 1.29 for ESAK, and 350.25, 566.07, 499.14, 950.62, and 1816.06 for KAP. The calculated regional DRLs for pediatric thorax radiography exceeded the published values for thorax protocols in some European countries. The irradiated surfaces significantly impacted the doses received by patients up to 10 years old (p-values of 0.004, <0.001, and 0.001).

    Conclusions: Adapting the irradiation surface to patient morphology is crucial, requiring precise control over exposure factors, radiation field size, and protocol selection.
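    As used in this abstract, a DRL is set at the 75th percentile (third quartile) of the dose distribution for a patient group. A minimal sketch with hypothetical ESAK samples (not the study's data):

    ```python
    import numpy as np

    # Hypothetical ESAK samples (mGy) for one age group
    esak = np.array([0.30, 0.45, 0.50, 0.55, 0.61, 0.70, 0.85, 1.10])

    # The third quartile (P75) of the group's dose distribution defines the DRL
    drl = np.percentile(esak, 75)
    ```

    In practice, this is computed per age group and per dose quantity (ESAK, KAP) before comparison with published reference levels.
    
    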

  • XML | PDF | downloads: 127 | views: 177 | pages:

    Purpose: This study aims to explore the effect of mean dose constraint in optimization shells on the reduction of normal lung dose in lung SBRT plans.

    Materials and Methods: This study investigated 28 VMAT-based lung SBRT plans optimized with three artificial shells, which were re-generated with the same setup plus an additional mean dose constraint alongside the maximum dose limit. Dosimetric metrics for the target volume and organs at risk (OARs) were compared between the original and re-generated plans using the Wilcoxon signed-rank test at the 5% significance level (two-tailed).

    Results: Replanning produced slight improvements in some parameters: R50% and the gradient measure (GM) decreased by 1.3% and 1.0%, respectively (p < 0.05). Other parameters, such as D2cm and the maximum target dose, increased slightly, but these increases were not statistically significant. The conformity index (CI) and V105% remained largely unchanged after replanning. The parameters for dose deposited in normal lung tissue showed statistically significant reductions ranging from 1.0% to 1.7%. In addition, the mean doses to the spinal cord, esophagus, and skin were slightly reduced, while the mean dose to the heart showed a slight increase.

    Conclusion: The study found that adding mean dose constraints to optimization shells in lung SBRT plans can reduce normal lung dose while maintaining dose conformity to the target, with only slight, statistically non-significant changes in some OARs such as the spinal cord, esophagus, and skin.


  • XML | PDF | downloads: 75 | views: 274 | pages:

    Purpose: Reinforcement Learning (RL) is attracting great interest because it enables systems to learn by interacting with the environment. This study aims to enhance the RL algorithm to become more similar to human motor control by combining it with the Non-negative matrix factorization (NMF) method.

    Materials and Methods: Signals recorded from six muscles involved in an arm-reaching movement performed without carrying a weight were pre-processed, and the optimal number of synergy patterns was extracted using NMF and the Variance Accounted For (VAF) method; this, in turn, reduces the computational load. Subsequently, the robustness of a two-link, six-muscle arm model was evaluated under various noise levels applied to the action coefficient matrix. Finally, the average synergy pattern was applied to this arm model, and the RL algorithm controlled it by producing the action coefficient matrix.

    Results: The average VAF was 97.25 ± 2.0%, and the number of synergies was four. The tip of the arm model was able to reach the target after an average of 100 episodes.

    Conclusion: The results indicated that the similarity in the extracted synergy patterns helps to model a system that is more similar to motor control. Additionally, the results of the synergistic patterns revealed that the two-link arm model with six muscles was suitable for the model. While controlling the model with the RL algorithm, the desired end-point position and path were achieved.
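    Synergy extraction with NMF, as described in this abstract, factorizes the muscle-by-time EMG matrix into synergy patterns and activation coefficients, and VAF selects the smallest number of synergies that explains enough variance. A generic scikit-learn sketch on synthetic data (not the study's recordings; the 90% VAF threshold, matrix sizes, and three ground-truth synergies are illustrative):

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    # Synthetic non-negative "EMG" built from 3 ground-truth synergies:
    # 6 muscles x 200 time samples
    W_true = np.abs(rng.normal(size=(6, 3)))
    H_true = np.abs(rng.normal(size=(3, 200)))
    emg = W_true @ H_true

    def vaf(X, W, H):
        """Variance Accounted For by the synergy reconstruction W @ H."""
        return 1.0 - np.sum((X - W @ H) ** 2) / np.sum(X ** 2)

    # Pick the smallest number of synergies whose VAF exceeds the threshold
    for k in range(1, 7):
        model = NMF(n_components=k, init="nndsvda", max_iter=1000, random_state=0)
        W = model.fit_transform(emg)   # synergy patterns (muscles x k)
        H = model.components_          # activation coefficients (k x time)
        if vaf(emg, W, H) >= 0.90:
            break
    ```

    Because the synthetic data have exactly three underlying synergies, the loop stops at a small k; the study's criterion works the same way on real pre-processed EMG envelopes.
    
    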

  • XML | PDF | downloads: 34 | views: 71 | pages:

    Purpose: Respiratory infectious diseases often manifest as ground-glass opacity (GGO) or consolidation signs in the lungs. Artificial intelligence (AI) assisted systems utilizing data mining algorithms such as Waikato Environment for Knowledge Analysis (Weka) can be used for the detection and segmentation of these signs. In this study, we propose using Weka as a comprehensive data mining and machine learning tool to develop the most accurate models for detecting lung signs in chest CT images of patients with respiratory infectious diseases.

    Materials and Methods: First, we manually selected specific signs from chest computed tomography (CT) images of 600 cases using the Weka graphical user interface (GUI) plugin. We then trained a random forest algorithm on different features and presented the best combined model obtained for the automatic detection of the mentioned signs. Lastly, model performance was evaluated with different metrics.

    Results: Our findings indicate that the hybrid texture-description features available in Weka, including “Structure”, “Entropy”, “Maximum”, “Anisotropic”, and “Laplacian”, demonstrated the lowest out-of-bag (OOB) error rate, the highest area under the ROC curve (AUC) of 0.992, and an accuracy of 98.1%.

    Conclusion: By leveraging the combination of Weka features, we have successfully developed models for the detection and segmentation of lung signs associated with infectious diseases, from chest CT images. These findings contribute to the field of medical image analysis and hold promise for improving the diagnosis and treatment outcomes of patients with respiratory infectious disorders.

  • XML | PDF | downloads: 96 | views: 143 | pages:

    Background: Alopecia encompasses different types of hair loss. Various methods for treating androgenetic alopecia (AGA) are being investigated at the preclinical stage using C57BL/6 mice affected by this condition.

    Objective: The purpose of the study was to evaluate the effects of dihydrotestosterone (DHT) on the skin layers of male C57BL/6 mice, simulating a model of AGA using high-resolution ultrasound imaging.

    Methods: Seven-week-old male C57BL/6 mice were selected for the study. To induce AGA, three of the mice received intraperitoneal injections of DHT at a dosage of 1 mg per day for five consecutive days, a known method for provoking hair loss via androgenic pathways. High-resolution ultrasound imaging at 40 and 75 MHz allowed detailed observation of skin-layer changes caused by DHT administration. The shear modulus and Young's modulus were extracted using dynamic loading during 40 MHz ultrasonography. Both the control and AGA-affected groups were evaluated through structural imaging and compared with histopathological results. Tissues were stained with hematoxylin-eosin (H&E) and Masson's trichrome.

    Results: Ultrasound imaging revealed that the epidermis thickness was 0.22 mm in the control group compared to 0.31 mm in the AGA group at 40 MHz. At 75 MHz, these measurements were 0.10 mm for the control group and 0.20 mm for the AGA group. The dermis thickness measurements showed 0.30 mm in the control group and 0.70 mm in the AGA group at 40 MHz, while at 75 MHz, the thicknesses were 0.40 mm for the control group and 0.70 mm for the AGA group. H&E staining results aligned with these ultrasound findings, confirming increased epidermal and dermal thicknesses in the AGA group. Elasticity metrics indicated a shear modulus of 1.19 kPa for the AGA group and 6.70 kPa for the control group, while Young’s modulus demonstrated values of 6.47 kPa for the control group and 22.69 kPa for the AGA group. Further corroboration of altered tissue elasticity was provided by Trichrome staining, indicating significant changes in skin structure.

    Conclusion: The administration of DHT in the C57BL/6 mice model leads to notable structural changes in skin layers, evidenced by an increased thickness of both the epidermis and dermis, along with diminished mechanical properties of skin elasticity.


  • XML | PDF | downloads: 81 | views: 140 | pages:

    Discovering the functional connections between human body parts can be beneficial for better control of brain-computer interface (BCI) systems. The brain, as the decision-making organ, controls all body parts to perform activities. In this study, the main objective is to investigate the relationships between hand muscles and the effect of each muscle on another using electroencephalogram (EEG) signals. To this end, brain connections are extracted as influential components, and a convolutional network is used to calculate the effect of EEG signals on the connections between hand muscles. The relationships between EEG signal channels are computed using correlation methods, coherence, the directed transfer function, Granger causality, and the phase lag index. The relationships between electromyogram (EMG) signal channels are also calculated using Granger causality. Signals are recorded in two phases, rest and activity, and ultimately the EMG activity is estimated solely from the EEG signals. Simulation results put the correlation between the estimated and actual patterns for the test data at around 0.949, indicating a high correlation between the estimated outputs and actual values. According to the research reviewed, there has been a lack of investigation into the EEG signal graph, its connection with the muscles, and its correlation with the EMG signal. Given that muscle actions require input from multiple brain regions, several areas of the brain are expected to be engaged during this process. Therefore, employing graph theory may yield more profound insights into this interaction than traditional approaches to analyzing brain connectivity.


    Keywords

    Vital signal connections, brain-computer interface, regression, convolutional networks


  • XML | PDF | downloads: 87 | views: 80 | pages:

    Purpose: The pineal gland (PG) is a midline brain structure and is considered the main part of the epithalamus. There are reports on the role of this region in brain function through hormone secretion, as well as a few reports on its role in cognition. However, little is known about the structural and functional connectivity of the PG with other brain regions, or about its associations with age and gender.

    Materials and Methods: In this work, we used diffusion and resting-state functional MRI data from 282 individuals aged 19 to 76 years. All participants were checked for their medical and mental health by a general practitioner, and the MRI data were collected using a 3 Tesla scanner. The diffusion data were analyzed using ExploreDTI software (version 8.3), and the fMRI data were analyzed using the CONN toolbox (version 18.0).

    Results: Two white matter tracts, connecting the PG body to the PG roots and the PG to the pons, were extracted in this study. The mean FA values of the two tracts were 0.26 ± 0.06 and 0.24 ± 0.08, respectively. Neither the FA values of the tracts nor their lengths showed any association with age or gender; however, with increasing age, the likelihood of successfully identifying the PG-pons tract decreased. In the functional connectivity analysis, five brain regions showed positive connectivity with the PG (the superior temporal gyrus, middle temporal gyrus, brain stem, vermis, and subcallosal cortex), and 25 regions showed negative connectivity. These connectivities showed no association with gender, but some associations with age were observed.

    Conclusion: This study is novel in estimating the functional and structural connectivity of the PG with other brain areas, and also in assessing the association of these connections with age and gender, which could help to increase our knowledge on the functional neuroanatomy of the pineal gland.

  • XML | PDF | downloads: 40 | views: 132 | pages:

    Purpose: There is a known decline in brain volume with age, impacting cognitive health and increasing the risk of diseases such as dementia and Alzheimer's. Physical activity has been shown to have positive effects on brain structure and cognitive function with aging. Still, the association between motor function and brain volume in young adults remains unclear.

    Materials and Methods: This study utilized high-resolution T1-weighted MRI images and motor function test results from 1082 healthy young adults aged 22-37, sourced from the Human Connectome Project Young Adult (HCP-YA). Motor functions were assessed using four tests: Endurance, Gait Speed, Dexterity, and Strength. Correlation analysis and multiple linear regression models were used to evaluate the association between motor functions and brain volumes, adjusting for demographic variables and body mass index (BMI).

    Results: Significant positive correlations were found between the Endurance and Strength tests and multiple brain volumes, while the Dexterity test showed negative correlations. No significant correlations were observed for the Gait Speed test. Multiple linear regression analyses revealed that total brain (β = 0.045, SE = 0.020), total gray matter (GM) (β = 0.035, SE = 0.016), left white matter (WM) (β = 0.058, SE = 0.025), right WM (β = 0.056, SE = 0.025), total WM (β = 0.057, SE = 0.025), and left accumbens (β = -0.072, SE = 0.031) volumes were significantly associated with motor function scores (p < 0.05).

    Conclusion: Physical fitness, as measured by motor function tests, is significantly associated with brain structural integrity in young adults. These findings highlight the potential importance of physical activity in maintaining brain health, which could inform strategies to promote active lifestyles and prevent neurodegenerative diseases.
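    The β values with standard errors reported in this abstract are standardized regression coefficients from multiple linear regression. A generic sketch of how such standardized associations are computed (synthetic data; the variable names, effect sizes, and covariate set are hypothetical, not the HCP-YA values):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    endurance = rng.normal(size=n)              # hypothetical motor score
    bmi = rng.normal(size=n)                    # hypothetical covariate
    # Hypothetical brain volume partly explained by endurance, adjusted for BMI
    volume = 0.05 * endurance - 0.02 * bmi + rng.normal(scale=0.1, size=n)

    def zscore(v):
        return (v - v.mean()) / v.std()

    # Standardized betas: regress the z-scored outcome on z-scored predictors
    X = np.column_stack([np.ones(n), zscore(endurance), zscore(bmi)])
    beta, *_ = np.linalg.lstsq(X, zscore(volume), rcond=None)
    ```

    After z-scoring, `beta[1]` is directly comparable across outcomes of different scales, which is why abstracts report βs in the -0.1 to 0.1 range for brain volumes measured in cubic millimetres.
    
    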

  • XML | PDF | downloads: 224 | views: 194 | pages:

    Purpose: To evaluate the antibacterial efficacy of different concentrations of natural cold-pressed flaxseed oil when used as an intra-canal medicament against Enterococcus faecalis.

    Materials and methods: The antibacterial efficiency of flaxseed oil against E. faecalis was assessed in two sections using different concentrations. Both sections were compared to calcium hydroxide and tricresol formalin. The first section was on agar, using two methods: agar diffusion and vaporization. The second section was on extracted roots contaminated with E. faecalis for 21 days to form biofilms, confirmed by SEM examination, and included two methods: direct contact and vaporization. Bacterial swabs were collected before and after medication at two time periods (3 and 7 days). The canal contents were swabbed using paper points kept for 1 minute in the root canal, and the collected samples were diluted and cultivated on blood agar plates. Survival fractions were determined by counting the colony-forming units on the culture medium after 24 hours.

    The oil's minimum inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) against E. faecalis were determined using the micro-broth dilution method.

    The active components in flaxseed oil were evaluated using GC-MS and HPLC analysis.

    Results: The tested oil demonstrated antibacterial efficacy against E. faecalis in different concentrations and levels. The MBC was 22.5 µl/ml. Tricresol formalin induced powerful antibacterial action, while calcium hydroxide exhibited less effective antibacterial action as compared to flaxseed oil. Flaxseed oil contains numerous biologically active components.

    Conclusion: Flaxseed oil exhibits strong antibacterial activity when evaluated against E. faecalis biofilm that has been cultivated in root canals.

  • XML | PDF | downloads: 245 | views: 232 | pages:

    Background: Patients with diabetes are more likely to develop polyhydramnios; its rate is higher among diabetic patients than among non-diabetic patients.

    Objective: To compare the amniotic fluid index of diabetics and non-diabetics using sonography.

    Methods: 200 people participated in this case-control study, 100 of whom were diabetic and 100 non-diabetic. A Toshiba XARIO XG scanner with a 3.5-7.5 MHz convex probe was used at the university ultrasound clinic in Green Town. All patients with diabetes or gestational diabetes aged 18-45 years were included during the 2nd and 3rd trimesters. Patients with underlying pathologies such as hypertension or multiple gestation were excluded. SPSS version 25.0 was used for data analysis.

    Results: The mean amniotic fluid index (AFI) in diabetics and non-diabetics was 21.19 and 13.20, respectively. The difference in AFI between diabetics and non-diabetics was statistically significant (p < 0.001). Chi-square analysis showed a significant association between AFI category and diabetes status, with the diabetic group having a higher proportion of cases in the polyhydramnios AFI category and a lower proportion in the normal AFI category than the non-diabetic group. The mean estimated fetal weight in diabetics and non-diabetics was 1341.64 and 1372.53, respectively; there was no significant difference in estimated fetal weight between diabetic and non-diabetic females (p = 0.088).

    Conclusions: The study concluded that diabetes during pregnancy is associated with a significant increase in amniotic fluid levels, leading to a higher likelihood of polyhydramnios.

  • XML | PDF | downloads: 113 | views: 127 | pages:

    In the past, heavy drinking was often linked to fatty liver. The prevalence of non-alcoholic fatty liver disease (NAFLD), which affects people who do not consume alcohol, has garnered a lot of attention in the last 20 years; NAFLD is now the leading cause of liver disease in industrialized nations. Fatty liver has traditionally been defined as a hepatic fat content of more than 5% of liver weight. Several medical issues, including those caused by medications, poor diet, and infections, may lead to fatty infiltration of the liver. Modern scientific understanding, however, attributes fatty liver in most individuals either to being overweight or obese or to drinking too much alcohol. This research proposes a stacked ensemble approach to detect NAFLD efficiently, achieving 95.9% classification accuracy, and compares the proposed method with other basic and boosting machine learning approaches. To make machine learning for NAFLD screening and diagnosis more trustworthy and reliable, we apply explainable AI methods to the ensemble model to identify the most influential features and patterns behind its NAFLD predictions.
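    A stacked ensemble of the kind proposed here trains several base classifiers and a meta-learner on their predictions. A generic scikit-learn sketch on synthetic tabular data (the base learners, meta-learner, and dataset are illustrative stand-ins, not the study's feature set or its reported 95.9% model):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for tabular NAFLD screening features
    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("dt", DecisionTreeClassifier(random_state=0))],
        final_estimator=LogisticRegression(),  # meta-learner on base predictions
    )
    stack.fit(X_tr, y_tr)
    acc = stack.score(X_te, y_te)
    ```

    Explainability methods (e.g. feature-importance or SHAP-style analyses) would then be applied to the fitted ensemble to surface the most influential features.
    
    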

  • XML | PDF | downloads: 22 | views: 57 | pages:

    Brain stroke is the sudden death of brain cells due to a lack of blood circulation; it forms a lesion/mass in the cerebral parenchyma and leads to loss of speech, weakness, or paralysis of one side of the body. If the disease is detected at an early stage, it can be treated. Existing methods do not provide sufficient accuracy. In this paper, two types of brain stroke lesions are classified: infarct (lack of blood supply) and haemorrhagic stroke (rupture of a blood vessel). Automated brain stroke lesion detection and classification using non-contrast computed tomography and a dual-stage deep stacked auto-encoder (DS-DSAE) with evolved gradient descent optimization (EGDO) is proposed to detect brain stroke at an early stage with high accuracy. The input images are taken at the slice level from a non-contrast CT image dataset. The images are pre-processed and enhanced by removing skull regions, and rotations are corrected by a mid-line symmetry process. The region of interest (ROI) is then extracted in the wavelet domain. The images are classified using the DS-DSAE, whose weight parameters are tuned using the EGDO algorithm. The abnormal portions of the brain stroke lesions are detected and classified as acute infarct, chronic infarct, ischemic stroke, haemorrhagic stroke, or normal. The objective is to increase accuracy while decreasing computational complexity. The simulation is executed on the MATLAB platform.
    The proposed CLACHE-IDFNN-MBO attains a higher accuracy of 99.56%, precision of 88.74%, F-score of 92.5%, sensitivity of 94.23%, and specificity of 91.45%, with a lower computational time of 0.019 s; it is compared with existing methods, namely the fractional-order BAT algorithm, fuzzy C-means with Delaunay triangulation (DT), social group optimization (SGO), Fuzzy-Tsallis entropy (FTE), the moth-flame optimization algorithm (MFOA), and Kapur's thresholding.

  • XML | PDF | downloads: 52 | views: 19 | pages:

    Purpose: Magnetoencephalography (MEG) is a brain imaging method with high temporal and spatial resolution that records neural magnetic fields. Its data quality is reduced for reasons such as the failure of one or more sensors. This study explores the efficiency of various data reconstruction techniques in magnetoencephalography for the retrieval of poor-quality channels.

    Materials and Methods: We compared three surface reconstruction methods (mean, median, and trimmed mean), two partial differential equation methods (a modified Poisson equation and the diffusion equation), and a finite element-based interpolation method using data from 11 young adults (aged 30 ± 12 years). Each technique was assessed in terms of reconstruction time, R-squared, root mean squared error (RMSE), and signal-to-noise ratio (SNR) relative to a reference signal. Statistical tests (significance level 0.05) were used to analyze the relationships between these evaluation criteria. Generalized linear models revealed that the surface reconstruction methods and finite element interpolation outperformed the partial differential equation methods.

    Results: The trimmed mean method achieved the highest R-squared (0.882 ± 0.0610) and the lowest RMSE (0.0155 ± 0.00904), with a reconstruction time of 9.5154 microseconds for a 500-millisecond epoch of magnetoencephalography channel data.

    Conclusion: The surface reconstruction methods can recover noisy or lost magnetoencephalography signals with acceptable error and time.
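    The surface reconstruction methods compared in this abstract estimate a bad channel sample-by-sample from its neighboring sensors via the mean, median, or trimmed mean, and score the result with RMSE and R-squared against a reference. A generic sketch on a synthetic signal (not the authors' MEG data; the 20% trim fraction, noise level, and six-neighbor layout are hypothetical):

    ```python
    import numpy as np
    from scipy import stats

    def reconstruct(neighbors, method="trimmed"):
        """Estimate a bad channel from its neighbors, sample by sample."""
        if method == "mean":
            return neighbors.mean(axis=0)
        if method == "median":
            return np.median(neighbors, axis=0)
        return stats.trim_mean(neighbors, 0.2, axis=0)  # trimmed mean

    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 4 * np.pi, 500))            # reference signal
    neighbors = truth + rng.normal(scale=0.1, size=(6, 500))  # 6 adjacent sensors
    est = reconstruct(neighbors)

    # Evaluation criteria used in the study
    rmse = np.sqrt(np.mean((est - truth) ** 2))
    r2 = 1 - np.sum((truth - est) ** 2) / np.sum((truth - truth.mean()) ** 2)
    ```

    Averaging across neighbors suppresses independent sensor noise, which is why these simple estimators achieve low RMSE and high R-squared at negligible computational cost.
    
    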

  • XML | PDF | downloads: 54 | views: 16 | pages:

    Purpose: The hippocampus is a crucial brain region responsible for memory, spatial navigation, and emotion regulation. Precise hippocampus segmentation from Magnetic Resonance Imaging (MRI) scans is vital in diagnosing various neurological disorders. Traditional segmentation methods face challenges due to the hippocampus's complex structure, leading to the adoption of deep learning algorithms. This study compares four deep learning frameworks to segment hippocampal parts, including concurrent, separated, ordinal, and attention-based strategies.

    Materials and Methods: This research utilized 3D T1-weighted MR images with manually delineated hippocampus head and body labels from 260 participants. The images were randomly split into five folds for the experiments; in each run, one fold was designated as the test set and the rest as the training set.

    Results: The findings indicate that both the concurrent and separated frameworks perform better than the ordinal and attention-based frameworks regarding the Dice and Jaccard coefficients. In head segmentation, the separated framework had a Dice similarity of 0.8748, a Jaccard similarity of 0.7794, and a Hausdorff distance of 5.4160. In body segmentation, the concurrent framework had a Dice similarity of 0.8616, a Jaccard similarity of 0.7591, and a sensitivity of 0.8437. Statistical results from the one-way ANOVA test showed a significant difference in performance for the body part (P-value=0.008), but not for the head region (P-value=0.652) between concurrent and separated frameworks. Comparing the concurrent with ordinal and attention-based frameworks showed a significant difference in both body and head regions (P-value<0.001 for both comparisons).

    Conclusion: Researchers must consider the differences between various frameworks while selecting a segmentation method for their specific task. Understanding the strengths and weaknesses of every framework is essential for deciding on the top-rated segmentation approach for precise applications.
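    The Dice and Jaccard coefficients reported above measure overlap between a predicted mask and the manual label: Dice = 2|A∩B|/(|A|+|B|) and Jaccard = |A∩B|/|A∪B|. A minimal sketch on toy binary masks (illustrative only, not the study's segmentations):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity between two binary masks."""
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def jaccard(a, b):
        """Jaccard similarity (intersection over union)."""
        inter = np.logical_and(a, b).sum()
        return inter / np.logical_or(a, b).sum()

    # Two 4x4 squares offset by one voxel: 16 voxels each, 3x3 = 9 overlapping
    pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
    gt   = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True
    ```

    For these masks Dice is 18/32 and Jaccard is 9/23; both reach 1.0 only for identical masks, and Dice is always at least as large as Jaccard on any overlap.
    
    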

Literature (Narrative) Review(s)

  • XML | PDF | downloads: 133 | views: 163 | pages:

    Purpose: The objective of this paper is to review non-invasive methods for intracranial pressure (ICP) monitoring and the research conducted in the field.

    Materials and Methods: A comprehensive literature search was conducted on NIH and PubMed databases; papers highlighting the newer methods used in intracranial pressure monitoring were reviewed, and the related data were included in the paper.

    Results: The prominent methods of non-invasive ICP monitoring reviewed were imaging (CT and MRI), electroencephalography (EEG), near-infrared spectroscopy (NIRS), optic nerve sheath diameter (ONSD), and transcranial Doppler (TCD) ultrasound.

    Conclusion: While invasive methods for ICP monitoring are preferred in clinical settings, with the intraventricular catheter being the gold standard, many non-invasive methods are considered, especially where invasive ICP monitoring is not possible. The use of non-invasive methods represents an advancement in the field of ICP monitoring. Although not yet well established clinically, non-invasive methods offer more safety and carry a lower risk of infection.

  • XML | PDF | downloads: 137 | views: 253 | pages:

    Background: This review aims to synthesize current literature on recent advances in the diagnosis and treatment of brain and spinal cord injuries (BSCI), focusing on molecular imaging, cell therapy, brain-computer interfaces (BCIs), and craniosacral therapy (CST).

    Methods: A systematic search was conducted in PubMed/MEDLINE, Scopus, Web of Science, Cochrane Library, and Google Scholar to identify relevant articles published between 2015 and 2025. Keywords included "Brain Injury," "Spinal Cord Injury," "Molecular Imaging," "Cell Therapy," "Brain-Computer Interface," and "Craniosacral Therapy."

    Results: Molecular imaging techniques, such as fMRI, DTI, and PET, enhance diagnostic accuracy by visualizing neural activity and structural integrity. Cell therapy, particularly with mesenchymal stem cells (MSCs), shows promise in promoting axon regeneration and reducing inflammation. BCIs offer potential for restoring motor function and enhancing neural plasticity. The evidence for CST is mixed, with some studies suggesting benefits in pain relief and cognitive improvement, while others raise concerns about methodological limitations.

    Conclusion: Recent advances in molecular imaging, cell therapy, and BCIs offer promising avenues for improving the diagnosis and treatment of BSCI. However, further rigorous research is needed to validate the efficacy of these approaches and to address ethical considerations. While CST has gained attention as a complementary therapy, more high-quality studies are required to determine its effectiveness. This review highlights the need for interdisciplinary collaboration to translate scientific discoveries into clinical practice and to improve the quality of life for individuals affected by BSCI.

  • XML | PDF | downloads: 204 | views: 475 | pages:

    Abstract

    Objective: To evaluate the effectiveness of polymer-based shields containing boron compounds for radiation protection in medical centers, focusing on their performance against neutron and gamma radiation.

    Methods: A comprehensive literature review was conducted using databases including PubMed, Scopus, Web of Science, and Embase. Studies published from 2010 to February 2025 were included. The search strategy employed keywords related to polymer-based shields, boron compounds, and radiation protection in medical settings.

    Results: Boron-containing polymers demonstrated significant potential for radiation shielding, particularly against neutrons. Nanocomposites incorporating high-Z elements showed improved gamma radiation attenuation. Hexagonal boron nitride (h-BN) nanocomposites exhibited superior neutron absorption properties. Epoxy-based composites with various nanoparticles showed enhanced protection against both neutron and gamma radiation. Recycled high-density polyethylene (R-HDPE) composites containing gadolinium oxide demonstrated promising thermal neutron shielding capabilities.

    Conclusion: Polymer-based shields containing boron compounds offer lightweight, flexible, and effective alternatives to traditional shielding materials. These materials show particular promise in medical applications, potentially improving safety for both patients and healthcare providers. However, challenges remain in optimizing material composition, thickness, and long-term stability for practical implementation in clinical settings.

Systematic Review(s)

  • XML | PDF | downloads: 232 | views: 280 | pages:

    Radiopharmaceuticals combine two main components: a pharmaceutical component that targets specific moieties, and a radionuclide component that acts through spontaneous decay for diagnostic purposes, therapeutic purposes, or both simultaneously, an approach known as theranostics. By combining diagnosis and therapy, radiotheranostics play an important role in reducing radiation doses to patients, increasing treatment effectiveness, controlling side effects, improving patient outcomes, and reducing overall treatment costs. Beyond their diagnostic and therapeutic roles, radiopharmaceuticals are also valuable for assessing prognosis, disease progression, and the likelihood of recurrence, for treatment planning, and for evaluating response to treatment. A central task of radiopharmacy is the development of new radiopharmaceuticals that are better targeted and better tolerated for imaging and treatment in the clinic. These approaches are supported by non-invasive nuclear medicine procedures. It is crucial that radiopharmaceutical drug delivery be highly selective and sensitive in order to minimize the potential radiation risk to patients. This report provides an overview of recent progress in radiopharmaceuticals for diagnosis and therapy, including the latest radiotheranostic tracers, key concerns within the field, and future trends and prospects.
Additionally, the available radiopharmaceuticals are categorized into separate tables based on their specific characteristics. This tabular presentation improves organization and accessibility, allowing readers to quickly locate relevant information, compare different radiopharmaceuticals, and grasp the essential attributes of each agent at a glance, ultimately supporting decision-making by healthcare professionals.

  • XML | PDF | downloads: 87 | views: 45 | pages:

    Background: This study provides a comprehensive review of recent advances in the application of nanocarriers for targeted drug delivery and radiosensitization in cancer radiotherapy (RT), and examines the challenges, solutions, and future prospects of this technology.

    Methods: A comprehensive literature search was conducted in PubMed, Scopus, Web of Science, and Embase, identifying 373 records. Following PRISMA guidelines, 36 studies met inclusion criteria focusing on functionalized nanocarriers in cancer RT. Data extraction covered nanoparticle types, functionalization, therapeutic payloads, cancer models, radiation modalities, and outcomes.

    Results: Forty studies were analyzed, categorized into iron oxide-based (10), silver (10), bismuth-based (7), graphene-based (4), gadolinium-based (4), and titanium-based (2) nanoparticles (NPs). Bismuth-based NPs (BiNPs) showed superior radiosensitization, with sensitizer enhancement ratios (SERs) of 1.25–1.48 and up to a 450% increase in reactive oxygen species (ROS) in vivo, achieving ~70% tumor volume reduction without systemic toxicity. Silver NPs (AgNPs) demonstrated dose enhancement factors (DEF) rising from 1.4 to 1.9 and synergistic effects with docetaxel plus 2 Gy radiation. Iron oxide NPs functionalized with HER2 and RGD ligands reduced cell viability 1.95-fold and achieved a DEF of 89.1 in targeted systems. Gadolinium NPs reached SERs up to 2.44 at 65 keV, while graphene-based systems enhanced ROS production by 75.2%. Titanium-based NPs increased ROS levels 2.5-fold. Combination therapies integrating chemotherapeutics such as cisplatin and curcumin with nanocarriers yielded SERs up to 4.29. Radiation modalities included megavoltage X-rays (4–10 MV, n=24), synchrotron keV X-rays (n=2), gamma rays (0.38–1.25 MeV, n=3), and electron beams (6–12 MeV, n=3).

    Conclusions: Bismuth-based NPs represent the most promising radiosensitizers due to their high efficacy, safety, and clinical relevance, supporting their advancement toward clinical translation.

Case Report(s)

  • XML | PDF | downloads: 138 | views: 153 | pages:

    Purpose: This case report aimed to describe a treatment for severe inflammatory external root resorption (RR).

    Materials and Methods: A 13-year-old boy presented with avulsion of his upper left central incisor. The tooth had been avulsed four months earlier and had been replanted forty minutes after the injury by an emergency service. The canal was thoroughly irrigated with 2% sodium hypochlorite and then filled, using lentulo spirals, with calcium hydroxide of a creamy consistency as an intracanal medication, chosen for its antimicrobial properties. The calcium hydroxide was left inside the canal for a month.

    Results: Following the diagnosis, treatment involved conventional endodontic therapy with calcium hydroxide dressings, and the root canal was definitively filled after radiographic control of the resorption. At the 6- and 12-month follow-ups, clinical and radiographic examinations revealed no signs or symptoms of any abnormalities. The resorption process had halted, and the radiograph showed the reappearance of the normal lamina dura, indicating successful therapy.

    Conclusion: This case report details the treatment of severe external inflammatory RR in a tooth undergoing orthodontic treatment. Successful tooth replantation depends on the effective implementation of the recommended therapy. However, when inflammatory external RR occurs, appropriate endodontic treatment is necessary to eliminate necrotic tissue and bacteria, along with the use of calcium hydroxide dressings.

Short Report(s)

  • XML | PDF | downloads: 42 | views: 49 | pages:

    Robotic surgery has transitioned from novelty to norm in the medical field, particularly in minimally invasive surgery. Robotic-assisted surgery offers greater precision, quicker recovery, and better patient outcomes, but challenges such as high costs, technical limitations, and ethical concerns hinder its wider adoption. This report outlines the advantages of robotic surgery, including greater precision, less invasive procedures, and better clinical outcomes, while addressing the barriers to its adoption. Emerging technologies such as AI integration, autonomous systems, and tele-surgery are potentially transformative but must be accompanied by strong regulatory frameworks. Collaboration among technologists, clinicians, and policymakers is essential to ensure patient safety and equitable access as robotic surgery advances.