6
Computational Intelligence Paradigms in Radiological Image Processing—Recent Trends and Challenges

Anil B. Gavade1, Rajendra B. Nerli2, Ashwin Patil3, Shridhar Ghagane4 and Venkata Siva Prasad Bhagavatula5

1Department of E&C, KLS Gogte Institute of Technology, Belagavi, Karnataka, India

2,3,4Department of Urology & Radiology, JN Medical College, KLE Academy of Higher Education and Research (Deemed-to-be-University), Belagavi, Karnataka, India

5Medtronic, Hyderabad, India

6.1 Introduction

There is currently a boom in modeling intelligence in algorithms to solve complex applications. This intelligence can be achieved through natural and biological inspiration, resulting in a technology known as intelligent systems; these algorithms use soft computing tools. AI aims to make machines and computers smarter, enabling a computer to mimic the human brain in specific applications. AI algorithms blend many research areas, such as biology, sociology, philosophy, and computer science. The purpose of AI is not to substitute human beings but to offer us a more powerful tool to support our work, providing more computing ability and permitting machines to exhibit more intelligent behavior.

6.2 Computational Intelligence

CI is a subfield of AI that deals with the study of adaptive mechanisms which enable intelligent behaviour in complex, changing environments; Figure 6.1 shows its paradigms. The three main pillars of CI are fuzzy systems, neural networks, and evolutionary computation. Each component has certain weaknesses, which can be mitigated by combining them together; such combinations are referred to as hybrid CI. Computers learn specific tasks from diverse forms of data or experimental observation, and this ability of computers to learn and adapt is usually referred to as CI. CI systems are considered to have the abilities of computational adaptation, high computational speed, and fault tolerance. Computational adaptation means the ability of a system to adapt to changes in its input and output instances.

Figure 6.1 Computational intelligence paradigms.

6.2.1 Difference between AI and CI

AI deals with the study of intelligent behavior exhibited by machines that mimic natural, human-like intelligence; it aims to develop an intelligent machine that can think, act, and make decisions as a human does. CI is the study of adapting and building intelligent behaviors in a changing, complex environment; its goal is to identify the computational models that produce intelligent behavior in artificial and natural systems operating in complex environments. AI and CI have broadly similar goals but are moderately different from each other.

6.2.2 Tools of Computational Intelligence

CI is the theory, design, application, and development of biologically motivated computational models. Traditionally it rests on three pillars: neural networks, fuzzy systems, and evolutionary computation. It also encloses computing models such as artificial life, culture learning, social reasoning, and artificial hormone networks. CI plays a major role in developing successful intelligent systems, including games and cognitive developmental systems. In reality, some of the most successful AI systems depend on CI.

6.3 Radiological Image Processing Introduction

Information is knowledge, and it can be represented in different forms; the digital image is one of them, and an image is worth a thousand words. Humans depend on images perceived by the eyes more than on any other sensory stimulus: the eyes capture the image, and the brain extracts information and interprets the objects. Today, most computer vision and machine learning applications work along similar lines. The drastic improvement and proliferation of radiological imaging has changed medicine, allowing physicians and scientists to gather information by looking noninvasively into the human body. The role of medical imagery has extended beyond visualization and inspection of anatomic structures; it now acts as a tool for surgical planning and simulation, intraoperative navigation, disease progress tracking, radiotherapy planning, etc.

Medical diagnostics today depends extensively on direct digital imaging techniques, and almost all radiological modalities are available in digital formats. The complexity of the information differs from one modality to another, ranging from an X-ray to an MRI or an ultrasound image of an organ. Radiological applications started with analog imaging modalities, but owing to improvements in sensor and computation technology, almost all radiological imaging is digital today. Medical images can be efficiently processed, objectively assessed, and accessed at several places at the same time through protocols and communication networks, such as Digital Imaging and Communications in Medicine (DICOM) and Picture Archiving and Communication Systems (PACS).

Digital image processing is an area of science and mathematics that manipulates the information present in an image; after manipulation, the results can serve as input to several applications. Digital image processing applications are found in almost every area of engineering and science, ranging from space exploration and robotics to medical applications. A digital image is a two-dimensional function represented as a matrix of rows and columns, whose smallest entity is referred to as a pixel (pel). The stages in digital image processing involve acquiring the image using sensors such as a charge-coupled device (CCD), storing and processing it using a digital computer, and finally displaying or printing it. Processing of radiological digital images involves image enhancement, image restoration, image analysis, image segmentation, image compression, image synthesis, and image quantification. A digital image can be represented in different forms, such as black and white, grey scale, color, and compressed images; image resolution and image type are directly correlated with data dimension. A medical image is commonly blurry and noisy because of the acquisition stages; the degradation of the image is due to poor contrast, poor illumination, and noise. Biomedical image analysis involves several stages, each of which is commonly a prerequisite for the subsequent stages; the final stage involves storage and decision making on the captured image, either by a medical practitioner or by a machine assisting the radiologist. The stages involved are image acquisition, image enhancement and restoration, image segmentation, and image classification and quantification. To perform these algorithms more efficiently and precisely, they need to be intelligent and fast in computation, and CI is the best tool for them.

Radiological imaging is a vastly interdisciplinary field that combines physics, medicine, computer science, and engineering. Primarily, radiological/biomedical image analysis is the application of digital image processing to medical or biological problems. However, in radiological imaging applications a number of other fields play a vital role, such as physiology, anatomy, and the physics of the imaging modality and instrumentation. The diagnosis or intervention in a medical application delivers the basis and motivation for biomedical image analysis. The choice of an imaging modality and of possible image processing stages depends on various medical factors, such as the type of tissue to be imaged or the suspected disease. Radiological imaging applications consist of four different stages; generally, each stage feeds the subsequent stages, but at any required stage the algorithms allow human intervention to make decisions or record the results. Imaging applications have, at a minimum, these stages: image capture (acquisition), image enhancement and restoration, image segmentation and classification, and finally image quantification, as shown in Figure 6.2.

Figure 6.2 Digital radiological image processing.

6.3.1 Image Acquisition

Image acquisition is the first step, forming a digital 2-dimensional representation of an object, such as a suspicious tissue in a patient; spatial resolution is significant in biomedical imagery. A digital image is a mapping of one or several tissue properties onto a discrete quadrangular grid; the grid elements are pixels, or voxels (volume elements) in 3-dimensional images, and these discrete values are stored in memory as integers. Each pixel or voxel has a physical meaning: for example, endoscopic and photographic image values are relative to light intensities, while Computed Tomography (CT) carries image values relative to local X-ray absorption. In MRI, the image values can represent a variety of tissue properties, depending on the acquisition sequence, essentially proton density or local echo decay times. The aim of the image acquisition step is to acquire contrast between the tissues to be analyzed. The human eye is enormously good at recognizing and classifying meaningful contrast, even in conditions with a poor signal-to-noise (S/N) ratio. Human vision permits immediate recognition and identification of spatial associations and makes it possible to notice subtle differences in density and to separate features from noise. An experienced radiologist will have no trouble identifying normal and abnormal tissues in radiological digital images, but for a computer this is a challenging task; here the next image processing steps come into play, and they are most significant for automation.

Image Enhancement: In computer vision and machine learning applications, image resolution is substantial for two purposes. First, it improves the visibility of perceptual features for more accurate and precise diagnosis by the radiologist. Second, it lets the subsequent stages, such as segmentation, identification, classification, and quantification of image data, perform in the best possible way. Pixel value remapping, filtering (spatial and frequency), and a few restoration methods are the most commonly used enhancement operators. Histogram stretching and histogram equalization are, respectively, linear and nonlinear enhancement techniques. Filters attenuate or amplify relevant characteristics of pixels in an image, and they make use of the pixel neighborhood. Filters that work directly on pixels are known as spatial domain filters, while those that use transforms such as the discrete cosine transform, Fourier transform, and wavelet transform, which describe image data in terms of periodic components, are frequency domain filters. Filters play a vital role in smoothing an image, sharpening the edges of objects in an image, and suppressing periodic artifacts, for example to eliminate a heterogeneous background intensity distribution.
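As a minimal illustration of contrast enhancement, the sketch below applies histogram equalization to a synthetic low-contrast image; it assumes OpenCV (`cv2`) and NumPy are available, and the synthetic image is only a stand-in for a real radiograph.

```python
import numpy as np
import cv2

# Synthetic low-contrast 8-bit "radiograph": values squeezed into [100, 140).
rng = np.random.default_rng(0)
img = (100 + 40 * rng.random((256, 256))).astype(np.uint8)

# Global histogram equalization spreads intensities over the full [0, 255] range.
equalized = cv2.equalizeHist(img)

# CLAHE (adaptive equalization) limits contrast amplification per tile, which
# often behaves better on medical images with local intensity variation.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
adaptive = clahe.apply(img)

print("input range:", img.min(), img.max())
print("equalized range:", equalized.min(), equalized.max())
```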

Image restoration and enhancement work along similar lines to reduce degradation in an image. Image degradation occurs due to lens misfocus, noise, and camera motion blur, and it arises in the acquisition process itself. To reverse the degradation we require filters that invert the degradation process, such that the error (e.g., mean-squared error) between the restored image and the idealized, unknown image is reduced. In microscopy imaging applications, inhomogeneous illumination plays a significant role; this illumination may vary over time and introduces motion artifacts. Filters employed to overcome blur require well-balanced design measures, otherwise local contrast enhancement increases the noise component and therefore decreases the signal-to-noise ratio. Equally, noise-reducing filters negatively affect the details of texture and edges: while these filters enhance the signal-to-noise ratio, image details may be lost or blurred. The design of an enhancement filter depends on the further steps of the image processing pipeline. An enhanced image is always preferred, whether for machine perception or human observation; human eyes can distinguish particular objects in an image even when significant noise exists, but machines cannot do the same. Improving image resolution is possible by several techniques; Super Resolution (SR) and Image Fusion (IF) are the methods most commonly employed in a number of applications.

6.3.2 Super Resolution Image Reconstruction

Super Resolution (SR) is a process in which the perceptual quality of an image is improved by replicating neighbouring pixels, zooming pixels, or combining multiple frames of the same scene into a reconstructed image. Machine learning approaches contribute a large number of algorithms, as shown in Figure 6.3. Currently, the efficiency and accuracy of machine-learning-based SR techniques have reached a stage where, given high-end computational resources, high-resolution image reconstruction is achievable. The choice of the best-suited algorithm must consider a number of aspects of the application, such as medical imaging, robotic inspection of objects, or satellite imagery. It is always essential to weigh external learning against the memory requirement and the computational complexity allowed, i.e., the trade-off between accuracy and processing time. Overall, the selection of an SR algorithm for each problem needs careful attention to the limitations presented by the application situation. Biomedical image enhancement is a process to remove and reduce the artifacts originating from improper illumination; several researchers have addressed SR in the transform domain, where it is observed that a few pixels are lost in the transformation from one domain to the other. Edges of objects, i.e., diseased tissues, play a significant role in image analysis, identification, and vision processing. As pixels are lost in transform domain mapping, the edges of tissues, boundaries, and textures are degraded, which leads to inappropriate diagnosis. Image enhancement algorithms are always application specific; they have to address the specific radiologist's sensitivity to contrast ratio, and their preferences have to be evaluated.

Figure 6.3 Machine learning methods in super resolution image enhancement.
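As a baseline against which learning-based SR methods are usually compared, the hedged sketch below upscales a low-resolution image with interpolation; the synthetic image and the 4x scale factor are illustrative assumptions, not part of any specific SR method from the literature.

```python
import numpy as np
import cv2

# Synthetic 64x64 low-resolution grayscale image with a bright square "lesion".
low_res = np.zeros((64, 64), dtype=np.uint8)
low_res[24:40, 24:40] = 200

scale = 4  # illustrative upscaling factor

# Nearest-neighbour replication of pixels (blocky) vs. bicubic interpolation
# (smoother edges); learned SR models aim to improve on both.
nearest = cv2.resize(low_res, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_NEAREST)
bicubic = cv2.resize(low_res, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC)

print(low_res.shape, "->", bicubic.shape)  # (64, 64) -> (256, 256)
```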

6.3.3 Image Fusion

Image fusion is the process of combining multiple images, often from multiple sensors, into a single image; this fused image carries more information than any individual input and contains all the relevant information. The aim of image fusion is to construct a better-quality image, more suitable for machine and human perception. Image fusion is carried out at the pixel, feature, and decision levels. Image fusion is in tremendous demand in the area of radiological diagnostics and the treatment that follows. The term multi-image fusion refers to combining multiple images of a patient from the same modality, or images taken from different modalities such as MRI and CT; the different image fusion techniques available today are shown in Figure 6.4.

Figure 6.4 Image fusion techniques.
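The following sketch shows pixel-level fusion in its two simplest forms, averaging and maximum selection; the two synthetic inputs stand in for co-registered MRI and CT slices, and registration is assumed to have been done already.

```python
import numpy as np

# Two synthetic co-registered images: one with soft-tissue-like detail, one
# with bone-like detail (purely illustrative stand-ins for MRI and CT).
rng = np.random.default_rng(1)
mri_like = (rng.random((128, 128)) * 120).astype(np.float32)
ct_like = np.zeros((128, 128), dtype=np.float32)
ct_like[40:90, 40:90] = 220.0

# Pixel-level fusion rule 1: average the two sources.
fused_mean = ((mri_like + ct_like) / 2.0).astype(np.uint8)

# Pixel-level fusion rule 2: keep the brighter pixel from either source.
fused_max = np.maximum(mri_like, ct_like).astype(np.uint8)

print(fused_mean.shape, fused_max.dtype)
```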

6.3.4 Image Restoration

Image restoration is a mathematical operation on a corrupted digital image that estimates the clean original image; the degradation may be due to object or camera misfocus, noise, random atmospheric turbulence, or motion blur. Image enhancement and image restoration are not the same: restoration improves features of the image that are informative in further stages. Thus, image restoration concentrates on modeling the noise function and the blurring, and then applying the inverse model to de-blur and de-noise the image. The objective of image restoration is to develop restoration algorithms that filter and eliminate the degradation from the input image, and in doing this, soft computing and computational intelligence play a vital role.

Digital image restoration deals with methods used to suppress a known degradation and recover the original image. It is a growing field of image processing. The objective of image restoration is to restore a distorted/degraded image to its original quality and content. Degradation is introduced by the image acquisition device due to nonlinearity of sensors, defects in optical lenses, blur due to camera misfocus, relative object-camera motion, atmospheric turbulence, etc.

Restoration tries to minimize some parameters of the degradation, reconstructing an image that has been degraded based on known degradation models (prior knowledge) and mathematical or probabilistic models. Usually, iterative restoration techniques attempt to model the degradation and then apply the inverse process to recover the original image.

There are two subprocesses:

  1. Degrading the quality of an image by adding noise and blur.
  2. Recovering the original image.

In restoration applications deblurring is very important because blurring is visually annoying. Different kinds of filters and additive noise are used to blur an image; the quality of an image is degraded by adding Gaussian and salt-and-pepper noise, as shown in Figure 6.5.

Figure 6.5 Degradation model for blurring the image.
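A minimal sketch of the degradation model just described follows: it blurs a synthetic image with a Gaussian kernel and then adds Gaussian and salt-and-pepper noise. The kernel size and noise levels are illustrative assumptions.

```python
import numpy as np
import cv2

rng = np.random.default_rng(2)

# Synthetic sharp 8-bit image: a bright disk on a dark background.
clean = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(clean, (64, 64), 30, 255, -1)

# Blur: convolve with a Gaussian kernel (simulating misfocus or motion smear).
blurred = cv2.GaussianBlur(clean, (9, 9), sigmaX=2.0)

# Additive Gaussian noise.
noisy = blurred.astype(np.float32) + rng.normal(0, 15, blurred.shape)
noisy = np.clip(noisy, 0, 255).astype(np.uint8)

# Salt-and-pepper noise: flip about 2% of pixels to 0 or 255.
mask = rng.random(noisy.shape)
noisy[mask < 0.01] = 0
noisy[mask > 0.99] = 255
```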

6.3.5 Restoration Model

This process estimates the original image from the degraded version: using restoration filters, the blur and noise factors are removed in order to obtain the original image, as shown in Figure 6.6.

Figure 6.6 Restoration model.
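One classical restoration filter matching Figure 6.6 is the Wiener filter in the frequency domain; the sketch below implements it with NumPy FFTs, assuming the blur kernel is known and using an illustrative constant K in place of the true noise-to-signal power ratio. The demo is noise-free, so a small K nearly inverts the blur.

```python
import numpy as np

def wiener_deconvolve(degraded, kernel, K=0.01):
    """Frequency-domain Wiener filter: F_hat = conj(H) / (|H|^2 + K) * G."""
    H = np.fft.fft2(kernel, s=degraded.shape)  # transfer function of the blur
    G = np.fft.fft2(degraded)                  # spectrum of the degraded image
    W = np.conj(H) / (np.abs(H) ** 2 + K)      # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))

# Demo: blur a synthetic image with a known 9x9 box kernel, then restore it.
clean = np.zeros((128, 128))
clean[40:90, 40:90] = 200.0
kernel = np.ones((9, 9)) / 81.0
H = np.fft.fft2(kernel, s=clean.shape)
degraded = np.real(np.fft.ifft2(H * np.fft.fft2(clean)))  # circular blur

restored = wiener_deconvolve(degraded, kernel, K=0.001)
print("max abs error:", np.abs(restored - clean).max())
```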

Sanchez et al. [1] note that in the recent decade several optimization methods have been proposed depending on the type of noise. Their paper explains algorithms to remove Gaussian, speckle, and impulsive noise: the NDF, PGFM, and PGFND methods were used for filtering, and the quality of the results was compared in each case. If Gaussian and speckle noise are present, the NDF method performs well in reducing the noise; if fixed impulsive noise appears in an image, the best technique is PGFM; and to deal with a combination of various noises, the PGFND technique is able to reduce the noise most effectively.

Lakshmi and Rakshit [2] described an objective evaluation method, analyzed by comparing the proposed distortion measures with different restoration algorithms for the estimation of the undistorted image. It automates the assessment of restoration in real time without demanding any knowledge of the original image, and it is derived without any assumption about image statistics and noise. The given measures have a noise-assessing term and a data-fidelity term, thus analyzing both the denoising and the deblurring behavior of an image restoration method.

Raid et al. [3] present image restoration based on morphological operations. There are two main morphological operations, i.e., dilation and erosion: in dilation the object is expanded, so small holes are filled and disjoint objects are connected. The proposed methodology mainly focuses on two basic morphological algorithms (region filling and boundary extraction) and four morphological operations (opening, closing, dilation, and erosion). It is implemented as a MATLAB program with a user interface in which changing the structuring element (SE) parameters, such as its type or size, is simple; however, objects lying close together may become stuck together, and this has to be solved by searching the objects.
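To make the four operations named above concrete, here is a hedged OpenCV sketch of erosion, dilation, opening, and closing with a small square structuring element; the 5x5 SE and the synthetic binary image are assumptions for illustration.

```python
import numpy as np
import cv2

# Synthetic binary image: a filled square with a small hole and a speck of noise.
binary = np.zeros((100, 100), dtype=np.uint8)
binary[30:70, 30:70] = 255
binary[48:52, 48:52] = 0    # small hole inside the object
binary[10, 10] = 255        # isolated noise pixel

se = np.ones((5, 5), dtype=np.uint8)  # 5x5 square structuring element

eroded = cv2.erode(binary, se)                          # shrinks objects
dilated = cv2.dilate(binary, se)                        # expands, fills the hole
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, se)   # erosion then dilation: removes the speck
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, se)  # dilation then erosion: closes the hole
```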

6.3.6 Image Analysis

Image analysis is the extraction of meaningful information from an input image; with this information, algorithms are able to identify objects, and the more features we provide, the better the classification and identification we can achieve. To achieve high accuracy, the algorithms need to be more intelligent and faster to respond; this can be achieved using deep learning, neural networks, capsule networks, and computational intelligence.

6.3.7 Image Segmentation

Image segmentation is one of the most significant steps in digital image processing; it separates foreground from background, which are treated as different objects in an image. The goal of segmentation is to assign each pixel to one of the classes and to extract the region of interest (ROI). For image segmentation to be effective, the objects must differ from one another in properties such as boundary, image intensity, texture, and shape. The output of this stage may be either an outline or a mask: an outline may be a set of curves or a parametric curve, like a polygonal approximation of an object's outline (shape), whereas a mask assigns pixel value 1 to objects and treats background pixels as 0, or vice versa. Image segmentation is one of the most complicated tasks; several segmentation methods exist, and they are largely application specific. The segmentation process should stop when the region of interest has been isolated. An outline of the most widespread segmentation techniques follows, as shown in Figure 6.7.

Figure 6.7 Different image segmentation techniques.

Edge-based Segmentation: Edge detection is a mathematical operation in image processing that detects the boundaries of objects present in an image. It works by detecting sharp changes in pixel intensity, which typically form the border between different objects. The basic idea of edge detection is to look for neighborhood pixels with a strong sign of change; such pixels are easily detected by a computer on the basis of intensity differences. Essentially, it is the process of finding meaningful transitions in an image. Feature extraction, image morphology, and pattern recognition can build on edge detection, which extracts features such as corners, lines, and curves of an image; this makes it easy to recognize segmentation boundaries and objects. The edge detection steps consist of smoothing, enhancement, detection, and localization. Commonly encountered edge types are the step edge, ramp edge, ridge edge, and roof edge.

Edge Detection Approaches: Edge detection methods fall into two classes, spatial domain and frequency domain. The spatial domain includes operator-based approaches categorized into first-order and second-order methods: first-order methods include Prewitt, Sobel, and Roberts, while Canny and Laplacian-based operators are second-order methods. In the frequency domain, the Fourier transform is used to convert the image: the low frequencies carry the coarse details of the image, while the high frequencies are used to obtain the image edges, though working purely with frequencies has certain limitations.

Gradient-based Edge Detection: This uses first-order derivatives, computing the gradient magnitude horizontally and vertically. Gradient-based edge detection is very simple to implement and capable of detecting edges and their directions, but edges are not located accurately because the method is sensitive to noise.

Sobel Edge Detection Operator: This operator extracts all the edges regardless of direction. It computes a gradient approximation of the image intensity, using 3 × 3 kernels convolved with the input image to compute the vertical and horizontal approximations, respectively. It provides a smoothing effect and time-efficient computation, but it has certain limitations: it is highly sensitive to noise and not very accurate, because it does not give appropriate results on thick and rough edges.

Prewitt Operator: The Prewitt edge detection operator detects vertical and horizontal edges of an image using kernels or masks. It is a good operator for detecting the magnitude and orientation of edges in an image.

Roberts Edge Detection Operator: This computes the sum of squares of the differences between diagonally adjacent pixels in an image through discrete differentiation. With this operator, the orientation and detection of edges are very easy, and diagonal direction points are preserved; however, it is very sensitive to noise and therefore not an accurate method for edge detection.

Laplacian of Gaussian (LoG): This is a derivative operator that uses the Laplacian, taken as a second derivative of the image. It is used to find sharp edges in all directions, having a fixed characteristic, and it detects edges easily.

Canny Operator: This is a Gaussian-based operator. The Canny operator is the most commonly used because it can extract the features of an image without altering them, it localizes the edge points well, and it is less sensitive to noise.
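A hedged sketch contrasting the first-order Sobel operator with the Canny operator is shown below; the kernel size and the Canny hysteresis thresholds (100, 200) are illustrative assumptions that in practice are tuned per image.

```python
import numpy as np
import cv2

# Synthetic 8-bit image: a bright rectangle on a dark background, mildly noisy.
rng = np.random.default_rng(3)
img = np.zeros((128, 128), dtype=np.uint8)
img[40:90, 30:100] = 180
img = cv2.add(img, rng.integers(0, 20, img.shape, dtype=np.uint8))

# Sobel: first-order horizontal and vertical gradient approximations.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx**2 + gy**2)  # gradient magnitude

# Canny: Gaussian smoothing, gradient, non-maximum suppression,
# then hysteresis with two thresholds.
edges = cv2.Canny(img, 100, 200)
```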

Thresholding-Based Segmentation: Thresholding is the simplest and one of the most powerful techniques in image segmentation. In this technique, pixels are partitioned depending on their intensity value; based on a threshold value, a binary image is produced from the grey scale image. It has advantages such as fast processing speed, smaller storage space, and ease of manipulation compared with grey-level images; therefore, the thresholding technique is commonly used.
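The sketch below applies global Otsu thresholding, which picks the threshold automatically from the histogram; the bimodal synthetic image is an assumption chosen so that Otsu's criterion has two clear modes to separate.

```python
import numpy as np
import cv2

# Synthetic bimodal grayscale image: background near 60, object near 180.
rng = np.random.default_rng(4)
img = rng.normal(60, 10, (128, 128))
img[40:90, 40:90] = rng.normal(180, 10, (50, 50))
img = np.clip(img, 0, 255).astype(np.uint8)

# Otsu's method chooses the threshold that minimizes intra-class variance;
# the 0 passed here is ignored and replaced by the computed optimum.
threshold, binary = cv2.threshold(img, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", threshold)  # expected to fall between the two modes
```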

Clustering: A collection or arrangement of similar items is called a cluster. Clustering methods group similar patterns together and can produce a very good segmentation result. In clustering it is frequently important to transform the information during preprocessing and to adjust the parameters until the outcome attains the desired properties. There are different clustering methods: K-means, fuzzy C-means, mixture of Gaussians, and ANN clustering (a K-means segmentation sketch follows the list below).

  • The K-means clustering method is an unsupervised algorithm. Its results are well separated; K-means is fast and robust, but it struggles with noisy data and nonlinear datasets.
  • Fuzzy C-means assigns membership degrees with respect to cluster centers. Its resulting clusters overlap, and comparatively it performs better than K-means.
  • The mixture-of-Gaussians algorithm is based on an a priori number "n" of Gaussians, with the data spanning the minimum and maximum around the Gaussian centers. This method is good for real-world data but complex in nature.
  • ANN clustering is based on prior data, and its results are well separated. An ANN works mainly on noisy images but has a slow convergence rate.
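Here is a hedged K-means intensity-clustering sketch using OpenCV; K = 3 and the termination criteria are illustrative assumptions, and on real scans features richer than raw intensity are usually clustered.

```python
import numpy as np
import cv2

# Synthetic grayscale image with three intensity populations.
rng = np.random.default_rng(5)
img = rng.normal(50, 8, (128, 128))
img[20:60, 20:60] = rng.normal(130, 8, (40, 40))
img[70:110, 70:110] = rng.normal(210, 8, (40, 40))
img = np.clip(img, 0, 255).astype(np.uint8)

# K-means clusters each pixel's intensity into K groups.
K = 3
samples = img.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(samples, K, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)

# Paint each pixel with its cluster center to get the segmented image.
segmented = centers[labels.flatten()].reshape(img.shape).astype(np.uint8)
print("cluster centers:", sorted(c[0] for c in centers))
```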

6.3.8 Region-Based Segmentation

Region-based segmentation is used to classify a particular image into a number of regions or classes; therefore, we need to examine and classify each pixel in the image. Methods of region-based segmentation are region growing, texture-based segmentation, and edge-based snakes. Region growing is a general technique of image segmentation in which image characteristics are used to group neighboring pixels together to form regions. Region-based techniques look for consistency within a subregion with respect to a property such as intensity, color, or texture. Region-based segmentation starts in the middle of an object and then grows slowly outwards until it meets the object boundary.

6.3.9 Watershed Segmentation

Watershed segmentation is a morphological segmentation that uses the watershed transform and belongs to the region-based segmentation techniques. The watershed transform can produce one-pixel-wide, connected, closed contours with accurate positioning of the edge. The algorithm is automatic and does not require any parameters to determine the termination conditions.
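A hedged marker-based watershed sketch in OpenCV follows; the distance-transform threshold of 0.7 and the synthetic pair of touching disks are assumptions taken from common usage, and the marker extraction step is what counters the over-segmentation discussed below in [8].

```python
import numpy as np
import cv2

# Synthetic binary image: two overlapping disks that watershed should split.
binary = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(binary, (50, 64), 25, 255, -1)
cv2.circle(binary, (85, 64), 25, 255, -1)

# Sure foreground: peaks of the distance transform inside each disk.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.7 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)

# Label each foreground peak as a separate marker; background stays 0.
n_markers, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                       # shift so background becomes 1
markers[(binary > 0) & (sure_fg == 0)] = 0  # unknown region to be flooded

# cv2.watershed needs a 3-channel image; boundary pixels get label -1.
color = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
markers = cv2.watershed(color, markers.astype(np.int32))
print("objects found:", n_markers - 1,
      "boundary pixels:", int(np.sum(markers == -1)))
```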

Mittal et al. [4] analyzed efficient edge detection approaches for image analysis. The proposed method was tested on ordinary and medical images and compared with the traditional edge detection algorithms. The developed method is able to obtain better entropy values and edge continuity, along with a lower noise proportion; however, it is not effective for blurry images, and its time consumption needs improvement. The problems encountered in traditional edge detection techniques, i.e., connectivity and edge thickness, are solved by B-Edge, which uses multiple thresholds; for effective edge detection and better connectivity, the proposed methodology uses a triple intensity threshold value. Finally, it was concluded that B-Edge obtains a better outcome than Canny: the developed method achieves good connectivity with improved edge width uniformity and produces acceptable entropy values.

Hamad et al. [5] note that a low-contrast medical image edge detection method based on fuzzy C-means clustering has been developed. The Canny edge detection algorithm performs well among all edge detection techniques, and FCM clustering segments the image. The algorithm and software will be developed further to bring image analysis to its primary stage; to solve urgent diagnosis problems, more analysis and processing will be done for real-time clinical CT and MRI imagery.

Badriyah et al. [6] conducted a study on stroke classification. CT brain imagery contains much noise. In thresholding, the gray scale image is converted into a binary image to segment the affected tissues from the CT image; here, global thresholding and Otsu thresholding are used for the classification of stroke into ischemic stroke, no stroke (normal), and hemorrhagic stroke classes. The proposed methodology was evaluated with three noise-removal filters, i.e., the Gaussian filter, bilateral filter, and median filter; image quality was best with the bilateral filter, with a peak signal-to-noise ratio of 69% and a mean-square error of 0.008%. Otsu thresholding is used for stroke object segmentation by specifying a lower threshold parameter ≤ 170.

Zotin et al. [7] describe proposed methods to detect brain tumors from a patient's MRI scan. In the first step, noise-removal functions like the median filter are used to improve the features of the medical images for reliability, along with the balance contrast enhancement technique (BCET). The image is then segmented by a fuzzy C-means method, and a Canny edge detector is applied to construct the edge map of the brain tumor. In this paper they compared the sensitivity and accuracy of different detection methods with the proposed method; after the comparison, they combined Canny and fuzzy C-means for better accuracy and sensitivity than the single methods.

Cui et al. [8] proposed an algorithm based on an improved watershed transform method. The watershed transform has a good response to weak edges but is unable to obtain a meaningful segmentation result directly; therefore, they made some improvements. They briefly explain image segmentation, the watershed algorithm, and marker extraction. The watershed transform is a morphological segmentation method, where morphology is the study of the form, shape, or structure of things. Their method can suppress noise and fine texture very accurately while avoiding over-segmentation.

Abubakar [9] describes two categories of image segmentation, covering the Sobel, Canny, and Roberts cross operators. Image segmentation separates the object of an image from its background. From the experiments it was observed that the Canny edge detector produced better edge detection maps, while image thresholding successfully separated the foreground from the background.

Dubey et al. [10] gave a brief review of image segmentation using different clustering methods. Clustering is the arrangement of articles such that items in a similar gathering form a cluster. They describe different clustering techniques, namely K-means, fuzzy C-means, ANN clustering, and the mixture of Gaussians, carrying out a comparative study among these techniques with respect to important parameters such as data center, algorithm used, advantages and disadvantages, and final best result. After the comparison they concluded that fuzzy C-means is better than K-means, while the mixture of Gaussians is suited to real-world data.

Gongwen et al. [11] describe wavelet-transform-based medical image segmentation as a broad term that encompasses a large range of applications. The paper analyzes this joint frequency- and time-domain tool, which has some good features: it reduces noise and marks the edges more precisely. The derived model solved the problems that occur with traditional and classic algorithms. More analysis can be done to develop new algorithms for the segmentation of medical images with very rapid, adaptable, and accurate results.

Fan et al. [12] observed automatic COVID-19 infection segmentation. A methodology is developed to identify the infected region using a segmentation network named Inf-Net. Semi-Inf-Net with MC is used for the segmentation of Ground Glass Opacity (GGO) and consolidation infection, which are accurately segmented. Future research will focus on the integration of segmentation, quantification, and detection of lung infection, and will also work on multi-class infection labeling for automatic AI diagnosis.

Yao et al. [13] describe a study whose focus was to implement label-free segmentation for COVID-19. They observed that the NormNet methodology performs quite well compared with other UAD methods, using bright pixels in CT imagery; it is able to segment COVID-19 lesions without a labeled dataset, reducing the time and complexity of manual labeling. The proposed unsupervised methodology is good but still requires a lot of development, as it can accurately segment only small lesions.

6.3.10 Image Compression

Image compression is an essential requirement in radiological imaging applications. As spatial and temporal resolution increase, the data generated are enormous; this leads to a requirement for large communication bandwidth and large memory for data storage, and the best alternative is to use data compression algorithms to reduce the data size. Moreover, lossless compression is generally used, preferred because no information is lost during compression; to achieve a high compression ratio we need to exploit soft computing and computational intelligence algorithms.
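As a small illustration of lossless compression, the sketch below encodes an image to PNG in memory and reports the compression ratio; PNG stands in here for the dedicated medical codecs (e.g., those used inside DICOM), which this sketch does not implement.

```python
import numpy as np
import cv2

# Synthetic smooth "CT slice" stored as 8-bit: smooth anatomy compresses well.
img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(img, (128, 128), 90, 120, -1)
cv2.circle(img, (128, 128), 40, 200, -1)
img = cv2.GaussianBlur(img, (7, 7), 2.0)

raw_bytes = img.size * img.itemsize  # uncompressed size in bytes

# Lossless PNG encoding in memory; decoding it reproduces img exactly.
ok, png_buf = cv2.imencode(".png", img)
restored = cv2.imdecode(png_buf, cv2.IMREAD_GRAYSCALE)

print("compression ratio: %.2f" % (raw_bytes / len(png_buf)))
print("lossless:", bool(np.array_equal(img, restored)))
```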

Kumar and Parmar [14] present lossless, lossy, and hybrid compression techniques for medical image compression, and also describe various watermarking methods and performance metrics. From the literature it is concluded that hybrid techniques are most commonly used because they have the ability to combine lossless and lossy compression in order to obtain better compression. Performance parameters are computed based on performance and efficiency, and rest chiefly on the compression ratio.

Cui et al. [15] proposed example-based texture modeling for image compression, one of the current standard approaches. It uses a fixed dictionary with texture samplers and a vector quantizer. In past work, the best predictor was usually selected based on its mean square error, but in addition this framework considers the prediction residual and the encoding rate. The compression quality is improved by selecting an accurate predictor; it was observed that the proposed methodology performs well compared to JPEG and has lower error than JPEG.

Moorthi and Amutha [16] explain that image compression mainly focuses on reducing image data, so that the data are easy to store and transmit efficiently. Initially, segmentation is applied to obtain two clusters, i.e., region of interest and non-region of interest. The higher-energy cluster uses integer-wavelet-transform-based compression, while the other cluster uses JPEG, one of the popular image formats. The designed model is able to preserve edge information and maintain a high compression ratio, providing a reliable and faster compression technique.

Shirsat and Bairagi [17] note that medical images are very sensitive and must convey very clear information without any loss, which can be achieved with lossless compression. The performance of lossless compression is enhanced by combining predictive coding and an integer transform. The comparative analysis shows that the predictive method gives more precise compression than plain wavelet-based image compression. There is very little possibility of losing data when using the predictive coding technique, where we consider the difference (subtracting the reconstructed image from the original image), or prediction error. System performance was computed using scale entropy and entropy for the compressed images, with acceptable image quality.

Jiang et al. [18] introduce a wavelet-based image compression algorithm for radiological imagery, a superior method for lossless image compression with improved vector quantization. The ultimate aim of the proposed methodology is to maintain the medical image, which contains diagnostically relevant data, at a high compression ratio. Initially the wavelet transform is applied; then a lossless compression method is used for the low frequencies and a novel vector quantization (VQ) with variable block size for the high frequencies. The experimental results show that the optimized method is able to improve image compression performance and achieve a proper balance between image visual quality and compression ratio. The proposed method was tested on liver and brain images, and a compression ratio of 25 was observed at different contrast ratios.

Image synthesis is the technique of generating new images from some form of image description; the synthesized images are typically 3-dimensional organ geometric shapes. A current requirement of medical image processing is to synthesize 3-dimensional shapes, and with deep neural networks and capsule networks augmented with computational intelligence it is possible to synthesize arbitrarily complex shapes.

Image quantification is assessing the degree of disease for a diagnosis; this amounts to a computer assessing a disease without the doctor's intervention. It is one of the most powerful tools and a future trend in radiological applications: image quantification measures and classifies objects as healthy or diseased. The advantage of computerized image quantification algorithms (CADe/CADx) is their objectivity and speed.

6.4 Fuzzy Logic

Fuzzy logic is a very interesting topic in AI. It allows membership values between 0 and 1, grey levels, and linguistic forms such as "a small tolerance". In traditional logic something can be represented only by a value of true (1) or false (0); in fuzzy logic a value can lie anywhere between 0 and 1. So something could be true but only partially true, or false to a certain degree. Take tap water temperature as an example: in traditional logic you would represent it as either cold or hot (0 or 1, respectively), but using fuzzy logic you have a gradient from hot to cold, so the water can be lukewarm, very hot, somewhat cold, and so on, instead of just cold or hot.

We encounter the fuzzy logic concept in our day-to-day life: the car is fast, the bag is heavy, today is a hot day, our exam was easy, and so on. None of these sentences contains any metric, number, or digit, and yet we understand and use them perfectly every day. We do all of this without having precise information or a mathematical model of the system; likewise, we can use fuzzy logic to produce accurate results in the presence of inaccuracies.

An example of the fuzzy logic concept, for better understanding, is learning to drive a car. When you are learning to drive for the first time, you are much more cautious and you create mental rules for yourself: for example, if you are driving fast and the distance between your car and the car in front of you is less than a certain value, you should brake immediately. The concept of driving fast is not a fixed rule or an agreed value; some people tend to drive faster than others, and some people are reckless when it comes to driving. "Driving too fast" can mean driving above 70 mph for a new learner, over 90 mph for someone with a couple of years of driving experience, and above 135 mph or more for some, regardless of which group of drivers they belong to.

If they sense that the distance between their car and the car in front of them is too short, they will slow down immediately, even the reckless ones. So, using fuzzy logic, we can design and develop systems that drive safely regardless of the type of driver that uses them.

Fuzzy Logic Working: Fuzzy logic is an extension of Boolean logic based on the mathematical concept of fuzzy sets. Fuzzy logic contains different components; the block diagram of a basic fuzzy system is shown in Figure 6.8.

Figure 6.8 Fuzzy logic block diagram.

  1. The fuzzifier is the part responsible for fuzzification, the process of converting crisp set data into fuzzy set data. It holds the membership functions for the linguistic variables of the fuzzy sets.
  2. The fuzzy rule-based system is an extension of the fuzzy logic concept. It consists of two main components:
    • Inference engine: This process maps inputs to the fuzzy output by combining the membership functions with the fuzzy control rules. Fuzzy inference is the processing unit based on fuzzy set theory: each rule has a weight between 0 and 1, which is multiplied by the membership value assigned to the output vector. When an input is specified, the fuzzy inference process obtains the output from the fuzzy rule-based system.
    • Knowledge base: This is the third layer of a fuzzy system and its most important part. It is a combination of a database and a rule base, storing the knowledge available about the problem being solved in linguistic "IF-THEN" rules.

A knowledge base is constructed either by experts or by a self-learning algorithm.

1) The first way is for experts to construct the rule base, as a system of if-then rules described by experts.

2) The second way is to use self-learning to construct the rule base. In this method, one part of the data is used to train the system, while the other part is to be solved by it. Such self-learning systems are called neuro-fuzzy systems.

      For example: Knowledge-based system in medicine

In the medical domain, data are acquired from patient history, laboratory tests, physical examination, and clinical investigations. These data are converted into linguistic concepts at the level of ideal medical knowledge, such as treatment recommendations, disease descriptions, and prognostic information.

  3. The defuzzifier is the process that maps the fuzzy inference output into crisp logic based on the corresponding membership degrees and fuzzy sets. A decision-making algorithm selects the best crisp value.
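To tie the three components together, here is a hedged NumPy-only sketch of one Mamdani-style inference step for the water-temperature example: triangular membership functions (fuzzifier), two IF-THEN rules (rule base and inference engine), and centroid defuzzification (defuzzifier). All membership shapes and rules are illustrative assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

temp = 38.0                       # crisp input: water temperature in Celsius
valve = np.linspace(0, 100, 501)  # output universe: % cold-water valve opening

# Fuzzifier: degree to which 38 C is "warm" and "hot".
mu_warm = tri(np.array(temp), 20, 35, 50)
mu_hot = tri(np.array(temp), 40, 70, 100)

# Rule base + inference (Mamdani min-implication, max-aggregation):
#   IF warm THEN valve "half open"; IF hot THEN valve "wide open".
half_open = tri(valve, 20, 50, 80)
wide_open = tri(valve, 60, 90, 100)
aggregated = np.maximum(np.minimum(mu_warm, half_open),
                        np.minimum(mu_hot, wide_open))

# Defuzzifier: centroid of the aggregated output fuzzy set.
crisp_out = np.sum(valve * aggregated) / np.sum(aggregated)
print("valve opening: %.1f%%" % crisp_out)
```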

Greeda et al. [19] present applications of the fuzzy expert system (FES) in medicine, which has been found to support practitioners in decision making. Fuzzy set theory plays a very important role in diagnostic decisions. FES are used in the prediction of patient condition, patient monitoring, handling of fuzzy queries, and prediction of aneurysms, fracture healing, etc. Fuzzy logic is based on human thinking and decision building, producing qualitative and quantitative evaluations of medical facts.

Kambalimath and Deka [20] describe how fuzzy models have been developed in various hydrology and water resources settings. Fuzzy-logic-based systems deal with problems involving uncertainty, approximation, vagueness, and partial truth where the data are limited, but they are not suitable on their own for mathematical imagery and solutions because of the absence of a mathematical explanation. The paper suggests that hybrid fuzzy modeling is more efficient: combining fuzzy logic with ANN or fuzzy SVM models obtains better accuracy than a pure fuzzy model.

Allahverdi's [21] paper describes applications of FES in the medical area, such as the determination of disease risk, coronary heart disease risk, periodontal dental disease, child anemia, etc. It concludes that fuzzy control and hybrid systems will give effective outcomes in the future. The proposed fuzzy model has been trained and tested to obtain a proper approximation between predicted and measured values for a precise outcome.

Kuruvilla and Gunavathi [22] propose FIS and ANFIS models for the classification of lung cancer using CT images. Morphological operations are used to segment the lung lobe from the CT images, and classification is done using statistical and GLCM parameters. Cluster shade, dissimilarity, skewness, and difference variance are some of the parameters selected by principal component analysis, which is used for feature selection. The adaptive neuro-fuzzy inference system, using a modified training algorithm, obtained 94% classification accuracy, whereas the FIS obtained 91.4% accuracy.

Maria Augusta et al. [23] present a fuzzy inference system created to support medical diagnoses in real time. The paper analyzes public and private health-care services and describes problems in the health-care sector such as poor allocation of resources, social inequality, inefficiency, and the absence of preventive medicine; preventive medicine is used as the output of the intelligent system. Using the fuzzy intelligent system, they show the feasibility of opening new channels for managing medical cost. They are working to improve hospital marketing in a socially responsible way, minimizing wait times and costs and keeping customers contented.

6.5 Artificial Neural Network

In 1943, the ANN was first proposed by Warren McCulloch and Walter Pitts. Neural networks are inspired by the biological neurons of the human brain; ANNs are built from assemblies of connected nodes or units known as artificial neurons. Each connection, analogous to a synapse in a biological brain, is capable of transmitting information to other neurons. An artificial neuron that receives information processes it and sends it to the next connected neurons; the result of each neuron is calculated by a nonlinear function of the sum of its inputs. Neurons have weights that alter during the learning process, changing based on certain thresholds. A simple neural network mainly consists of input, output, and hidden layers, where the number of hidden layers depends on the requirements, as shown in Figure 6.9. ANNs find many applications in radiological imagery, from preprocessing to identification and classification. ANNs are at their best when the data dimensions are small, but they do not perform well when the data are large; because of this, most current advanced applications in radiological imaging use deep learning with deep neural networks.

Figure 6.9 Biological and artificial neural networks.

An ANN is a mathematical illustration of the human neural architecture, reflecting its "generalization" and "learning" abilities; thus, it belongs to AI. ANNs are broadly applied in research because they can model nonlinear structure where the relationship between variables is unknown and very complex. An ANN can have a single layer or multiple layers. It consists of a series of neurons, or nodes, that are interconnected in layers by a set of adjustable weights, with each neuron connected to every neuron in the next layer. Generally, an ANN consists of three layers, i.e., an input layer, hidden layer, and output layer. The input layer neurons receive the information and pass it to the next hidden layer via weighted links. One or more hidden layers process the data mathematically and try to extract the patterns. Each neuron has weighted inputs, a transfer function, and a single output: the neuron is activated by the weighted sum of the inputs it receives, and the activation signal passes through a transfer function to produce a single output. Ultimately, the last layer of neurons provides the final network output.
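The forward pass just described can be written in a few lines; the hedged sketch below pushes one input vector through a 3-4-2 network with a sigmoid transfer function, with randomly initialized weights standing in for the values a training procedure would learn.

```python
import numpy as np

def sigmoid(z):
    """Transfer (activation) function squashing activations into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(7)

# Weights and biases for a 3-input, 4-hidden-neuron, 2-output network.
# Random initialization here; training would adjust these values.
W_hidden = rng.normal(0, 1, (4, 3))
b_hidden = np.zeros(4)
W_output = rng.normal(0, 1, (2, 4))
b_output = np.zeros(2)

x = np.array([0.2, 0.7, 0.1])  # one input pattern (e.g., three image features)

# Each layer: weighted sum of the inputs, then the transfer function.
hidden = sigmoid(W_hidden @ x + b_hidden)
output = sigmoid(W_output @ hidden + b_output)
print("network output:", output)
```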

The classification of ANNs in Figure 6.10 shows the multiple types of neural networks. The most suitable neural network is chosen per application, each with its own specifications and level of complexity; there are mainly two types of ANN.

Figure 6.10 Framework for ANN classification.

  1. Feed-forward neural network

It is the more commonly used type, in which information flows in one direction only, i.e., from input to output. No feedback loops are present in this type of ANN. It is used for pattern recognition, and it has fixed inputs and outputs.

  2. Feedback ANN

This type of ANN allows feedback loops. It is used by internal system error connections and in content-addressable memories.

Al-Shayea [24] reports that the proposed diagnosis ANN is a powerful tool that deals with complex clinical data and helps the doctor reach a proper diagnosis and treatment. The paper analyzes two cases, acute nephritis and heart disease. A feed-forward back-propagation network with supervised learning is used as the classifier for both diseases: it is able to classify infected and non-infected persons in heart disease with 95% correct classification, while in acute nephritis the network is able to learn the pattern from the selected symptoms and classifies with 99% accuracy.

Shahid et al. [25] estimated that ANNs can be applied to all levels of health-care organizational decision making. They found that hybrid approaches are very effective at reducing challenges such as having insufficient data or introducing a new item into the system. The most successful uses of ANNs are observed in extraordinarily complex medical situations. ANNs are found to be used most often for prediction, classification, and clinical diagnosis in the areas of telemedicine, organizational behavior, and cardiovascular medicine.

Amato et al. [26] describe the ANN as a powerful framework to assist physicians and other practitioners. ANNs have proven suitable for various diseases, and their use makes diagnostic analysis more reliable and consequently increases patient satisfaction. The paper describes the workflow of an ANN analysis, including the major steps of feature selection, database building, preprocessing, training, testing, and verification for rapid and correct diagnostic prediction of various diseases.

Abiodun et al. [27] present a survey of the application of neural networks in real-world scenarios. They conclude that ANNs can be applied in any industrial, biomedical, or professional field. Based on data analysis factors, ANNs are observed to be effective, successful, and efficient; they therefore have the ability to solve complex and non-complex real-life problems. Finally, the results are summarized over the various fields of ANN application covering pattern recognition, prediction, and classification.

Mossalam and Arafa [28] propose a model that uses databases of 48 projects. The aim of the study is to identify the variables and enterprise databases that define project criticality, along with the related information needed to build a strong neural network model. Four major steps are implemented to develop and test the proposed ANN model, i.e., data preparation, training, testing, and sensitivity analysis. The research uses three configurations of nets to develop an intelligent model, rather than the existing manual selection process. Results were obtained by a comparative analysis of the PN method, the multi-layer feed-forward network (MLFN), and best net search; best net search generated the best predictions for the data.

6.6 Evolutionary Computation

These algorithms are much preferred in areas where mathematical methods are inadequate to solve a broad range of problems, typically in applications such as DNA analysis and scheduling; an example is shown in Figure 6.11. One of the most prominent evolutionary algorithms is the genetic algorithm, whose procedure is sketched below. Evolutionary algorithms aim to bring out novel artificial evolutionary techniques exploiting the strengths of natural evolution, and they are most commonly engaged in search optimization problems that require an optimal result.

Figure 6.11 Evolutionary computation.
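A hedged sketch of the genetic algorithm loop (selection, crossover, mutation) follows, maximizing a toy fitness function; the population size, mutation rate, and the fitness function itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def fitness(pop):
    """Toy objective: maximize f(x) = -(x - 3)^2, optimum at x = 3."""
    return -(pop - 3.0) ** 2

pop = rng.uniform(-10, 10, 40)  # initial population of candidate solutions

for generation in range(100):
    fit = fitness(pop)

    # Selection: tournament of random pairs, keep the fitter individual.
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where(fit[i] > fit[j], pop[i], pop[j])

    # Crossover: blend random parent pairs.
    mates = rng.permutation(parents)
    alpha = rng.random(len(pop))
    children = alpha * parents + (1 - alpha) * mates

    # Mutation: small Gaussian perturbation on ~10% of offspring.
    mutate = rng.random(len(pop)) < 0.1
    children[mutate] += rng.normal(0, 0.5, mutate.sum())

    pop = children

print("best solution: %.3f" % pop[np.argmax(fitness(pop))])
```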

Pena-Reyes and Sipper [29] focus mainly on evolutionary computation (EC) and its medical applications, observing the effectiveness of various evolutionary algorithms in medicine. EC makes use of the metaphor of natural evolution; the EC family introduced powerful techniques used to search complex spaces. There are mainly three tasks demonstrated by EC: data mining; signal processing and medical imaging; and scheduling and planning. In data mining, EC works as a parameter filter and has the ability to discover the knowledge necessary for interpreting accumulated information; prognosis uses it frequently because of its predictive nature. EC is used to improve the performance of signal processing algorithms such as compressors and filters, and the required clinical data are extracted from a welter of data with the help of EC. It also plays a very important role in planning and scheduling, specifically for 3-dimensional radiography and many other medical procedures. EC is applied in medicine to perform several tasks in diagnosis, especially for decision support.

Mahesh and Renjit [30] wrote a survey whose intention was to review segmentation and evolutionary intelligence, including various classification techniques. The review presents segmentation- and classification-based approaches for the recognition of brain tumors from MRI imagery. Fifty research papers were studied and analyzed, with emphasis on the image datasets used, feature extraction techniques, image modality, evaluation measures, implementation tools, and the final achieved results. It is concluded that the performance of all the proposed techniques was good with respect to the different modalities and their requirements, but a lot of improvement is still necessary to get the desired outcome. Compared with the other existing classification-based techniques, hybrid techniques gave good classification accuracy; nonlinear classification and 3-dimensional evaluation remain complex. It was seen that a greater number of researchers focused on classification techniques for tumor recognition.

Nakane et al. [31] briefly summarize, in a literature review, computer vision applications using evolutionary algorithms (EAs) and swarm algorithms (SAs), observing their characteristics and differences. The study concentrates on four algorithms: differential evolution (DE) and the genetic algorithm (GA), which belong to the EAs, and ant colony optimization (ACO) and particle swarm optimization (PSO), which belong to the SAs. Among these four representative algorithms, the GA and PSO are the more commonly adopted in computer vision applications because of their efficiency, parameter tuning, and practical applications. Combining EAs/SAs with deep neural networks, for example in neural architecture search, is one of the popular fields of research. The ultimate aim of computer vision is to understand meaningful information and extract features from videos and images, and it is concluded that evolutionary and swarm algorithms have the potential to solve various complex problems very precisely.

Holmes [32] notes that evolutionary computation is a popular approach that can be used alone or with machine learning. The approaches to evolutionary computation are divided into genetics-based and non-genetics-based algorithms. EC is model-free and provides a meta-heuristic structure, where there is no need for perfect data or prior assumptions; thus, it can solve a wide range of problems. Genetic algorithms can be used to identify mRNA targets, to identify lesions on mammograms, to mine temporal workflow data, etc.

Slowik and Kwasnicka [33] present applications of the evolutionary family to real-life problems. The complete family of evolutionary optimization algorithms is considered evolutionary computation. The paper describes the main properties of various evolutionary computation algorithms, with pseudo-code presented for each EC technique for easy implementation. The literature review gives an overview of all EC methods suitable for many industrial and engineering problems, though some small gaps remain between the practical and theoretical aspects. Currently, EAs are being modified by hybridization with other algorithms in order to obtain better performance.

6.7 Challenges

Tremendous improvements in image acquisition sensors have revolutionized radiological imaging applications over the past two decades; the image quality and the volume of information obtained are very large, which helps the radiologist make a proper and accurate assessment of disease. The big challenge now is to develop hardware architectures that can process these data at high speed and affordable cost; there is a need to improve processing speed.

6.8 Conclusion

This chapter provided an overview of computational intelligence applications in radiological imagery. It introduced the fundamental principles of digital image processing and the steps involved in image computation, and covered the computational intelligence paradigms based on fuzzy logic, ANNs, and EC. Finally, the chapter described a few applications of these paradigms and emphasized how algorithms could be made more intelligent and process at high speed with better accuracy.

Bibliography

  1. Sánchez, M.G., Vidal, V., Verdú, G., Mayo, P., and Rodenas, F. (2012). Medical image restoration with different types of noise. 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 4382–4385. IEEE.
  2. Lakshmi, A. and Rakshit, S. (2010). An objective evaluation method for image restoration. Journal of Electrical and Computer Engineering 2010.
  3. Raid, A.M., Khedr, W.M., El-Dosuky, M.A., and Aoud, M. (2014). Image restoration based on morphological operations. International Journal of Computer Science, Engineering and Information Technology (IJCSEIT) 4 (3): 9–21.
  4. Mittal, M., Verma, A., Kaur, I., Kaur, B., Sharma, M., Goyal, L.M., Roy, S., and Kim, T.-H. (2019). An efficient edge detection approach to provide better edge connectivity for image analysis. IEEE Access 7: 33240–33255.
  5. Hamad, Y.A., Simonov, K., and Naeem, M.B. (2018). Brain's tumor edge detection on low contrast medical images. 2018 1st Annual International Conference on Information and Sciences (AiCIS), 45–50. IEEE.
  6. Badriyah, T., Sakinah, N., Syarif, I., and Syarif, D.R. (2019). Segmentation stroke objects based on CT scan image using thresholding method. 2019 First International Conference on Smart Technology & Urban Development (STUD), 1–6. IEEE.
  7. Zotin, A., Simonov, K., Kurako, M., Hamad, Y., and Kirillova, S. (2018). Edge detection in MRI brain tumor images based on fuzzy C-means clustering. Procedia Computer Science 126: 1261–1270.
  8. Cui, X., Deng, Y., Yang, G., and Wu, S. (2014). An improved image segmentation algorithm based on the watershed transform. 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference, 428–431. IEEE.
  9. Abubakar, F.M. (2012). A study of region-based and contour-based image segmentation. Signal & Image Processing 3 (6): 15.
  10. Dubey, S.K. and Vijay, S. (2018). A review of image segmentation using clustering methods. International Journal of Applied Engineering Research 13: 2484–2489.
  11. Gongwen, X., Zhijun, Z., Weihua, Y., and Li'Na, X. (2014). On medical image segmentation based on wavelet transform. 2014 Fifth International Conference on Intelligent Systems Design and Engineering Applications, 671–674. IEEE.
  12. Fan, D.-P., Zhou, T., Ji, G.-P., Zhou, Y., Chen, G., Fu, H., Shen, J., and Shao, L. (2020). Inf-Net: automatic COVID-19 lung infection segmentation from CT images. IEEE Transactions on Medical Imaging 39 (8): 2626–2637.
  13. Yao, Q., Xiao, L., Liu, P., and Zhou, S.K. (2021). Label-free segmentation of COVID-19 lesions in lung CT. IEEE Transactions on Medical Imaging.
  14. Kumar, P. and Parmar, A. (2020). Versatile approaches for medical image compression: a review. Procedia Computer Science 167: 1380–1389.
  15. Cui, J.-Y., Mathur, S., Covell, M., Kwatra, V., and Han, M. (2010). Example-based image compression. 2010 IEEE International Conference on Image Processing, 1229–1232. IEEE.
  16. Moorthi, M. and Amutha, R. (2011). An improved algorithm for medical image compression. International Conference on Computing and Communication Systems, 451–460. Berlin, Heidelberg: Springer.
  17. Shirsat, T.G. and Bairagi, V.K. (2013). Lossless medical image compression by integer wavelet and predictive coding. International Scholarly Research Notices 2013.
  18. Jiang, H., Ma, Z., Hu, Y., Yang, B., and Zhang, L. (2012). Medical image compression based on vector quantization with variable block sizes in wavelet domain. Computational Intelligence and Neuroscience 2012.
  19. Greeda, J., Mageswari, A., and Nithya, R. (2018). A study on fuzzy logic and its applications in medicine. International Journal of Pure and Applied Mathematics 119 (16): 1515–1525.
  20. Kambalimath, S. and Deka, P.C. (2020). A basic review of fuzzy logic applications in hydrology and water resources. Applied Water Science 10 (8): 1–14.
  21. Allahverdi, N. (2014). Design of fuzzy expert systems and its applications in some medical areas. International Journal of Applied Mathematics Electronics and Computers 2 (1): 1–8.
  22. Kuruvilla, J. and Gunavathi, K. (2015). Lung cancer classification using fuzzy logic for CT images. International Journal of Medical Engineering and Informatics 7 (3): 233–249.
  23. de Medeiros, I.B., Machado, M.A.S., Damasceno, W.J., Caldeira, A.M., dos Santos, R.C., and da Silva Filho, J.B. (2017). A fuzzy inference system to support medical diagnosis in real time. Procedia Computer Science 122: 167–173.
  24. Al-Shayea, Q., El-Refae, G., and Yaseen, S. (2013). Artificial neural networks for medical diagnosis using biomedical dataset. International Journal of Behavioural and Healthcare Research 4 (1): 45–63.
  25. Shahid, N., Rappon, T., and Berta, W. (2019). Applications of artificial neural networks in health care organizational decision-making: a scoping review. PLoS One 14 (2): e0212356.
  26. Amato, F., López, A., Peña-Méndez, E.M., Vaňhara, P., Hampl, A., and Havel, J. (2013). Artificial neural networks in medical diagnosis. Journal of Applied Biomedicine 11 (2): 47–58.
  27. Abiodun, O.I., Jantan, A., Omolara, A.E., Dada, K.V., Mohamed, N.A., and Arshad, H. (2018). State-of-the-art in artificial neural network applications: a survey. Heliyon 4 (11): e00938.
  28. Mossalam, A. and Arafa, M. (2018). Using artificial neural networks (ANN) in projects monitoring dashboards' formulation. HBRC Journal 14 (3): 385–392.
  29. Pena-Reyes, C.A. and Sipper, M. (2000). Evolutionary computation in medicine: an overview. Artificial Intelligence in Medicine 19 (1): 1–23.
  30. Mahesh, K.M. and Renjit, J.A. (2018). Evolutionary intelligence for brain tumor recognition from MRI images: a critical study and review. Evolutionary Intelligence 11 (1): 19–30.
  31. Nakane, T., Bold, N., Sun, H., Lu, X., Akashi, T., and Zhang, C. (2020). Application of evolutionary and swarm optimization in computer vision: a literature survey. IPSJ Transactions on Computer Vision and Applications 12 (1): 1–34.
  32. Holmes, J.H. (2014). Methods and applications of evolutionary computation in biomedicine. Journal of Biomedical Informatics 49 (C): 11–15.
  33. Slowik, A. and Kwasnicka, H. (2020). Evolutionary algorithms and their applications to engineering problems. Neural Computing and Applications: 1–17.

Acknowledgment

We would like to express our gratitude to the Principal, KLS GIT, Belagavi, and the management of the KLS Society for providing the opportunity to carry out this research in association with JN Medical College, KLE Academy of Higher Education and Research (Deemed-to-be-University), Belagavi.

Dr. Hari Prabhat Gupta (SMIEEE, [email protected], https://sites.google.com/site/hprabhatgupta) is an Assistant Professor in the Department of Computer Science and Engineering, Indian Institute of Technology (BHU) Varanasi, India. Previously, he was a Technical Lead at Samsung R&D Bangalore, India. He received his Ph.D. and M.Tech. degrees in Computer Science and Engineering from the Indian Institute of Technology Guwahati in 2014 and 2010, respectively, and his B.E. degree in Computer Science and Engineering from Govt. Engineering College Ajmer, India. His research interests include the Internet of Things (IoT), Wireless Sensor Networks (WSN), and Human-Computer Interaction (HCI). Dr. Gupta has received various awards, such as the Samsung Spot Award for outstanding contribution to research, the IBM GMC project competition award, and the TCS Research Fellowship. He has guided 3 Ph.D. theses and 5 M.Tech. dissertations, completed two sponsored projects, and published three patents and more than 100 IEEE journal and conference papers.

Swati Chopade ([email protected]) received her M.Tech. degree in Computer Science and Engineering from VJTI, Mumbai, India. Presently, she is pursuing a Ph.D. in the Department of Computer Science and Engineering, IIT (BHU) Varanasi. Her research interests include machine learning, sensor networks, and cloud computing.

Dr. Tanima Dutta (SMIEEE, [email protected], https://sites.google.com/site/drtanimadutta) is an Assistant Professor in the Department of Computer Science and Engineering, Indian Institute of Technology (Banaras Hindu University), Varanasi, India. Previously, she was a Researcher at TCS Research & Innovation, Bangalore, India. She received her Ph.D. from the Department of Computer Science and Engineering, Indian Institute of Technology (IIT) Guwahati in 2014. Her Ph.D. was supported by a TCS (Tata Consultancy Services) Research Fellowship, and she received a SAIL (Steel Authority of India Limited) Undergraduate Scholarship while pursuing her B.Tech. degree. Her research interests include (major) deep neural networks, machine learning, computer vision, and image forensics, and (minor) Human-Computer Interaction (HCI) and the Intelligent Internet of Things (IIoT).
