Artificial Intelligence Compression and Acceleration for Smart Sensing Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 July 2023) | Viewed by 6950

Special Issue Editors


Guest Editor
L@bISEN, ISEN Yncrea Ouest, 33 quater chemin du champ de Manœuvre, 44470 Carquefou, France
Interests: digital VLSI design; smart vision systems; FPGA design; image and video coding; reconfigurable architectures

Guest Editor
L@bISEN, ISEN Yncrea Ouest, 33 quater chemin du champ de Manœuvre, 44470 Carquefou, France
Interests: computer vision

Guest Editor
L@bISEN, ISEN Yncrea Ouest, 33 quater chemin du champ de Manœuvre, 44470 Carquefou, France
Interests: machine learning; deep learning; computer vision

Special Issue Information

Dear Colleagues,

In recent years, deep neural networks (DNNs) have achieved overwhelming success in a wide range of artificial intelligence applications. This success is largely due to the availability of GPU and TPU clusters that can train very deep models, with thousands of layers and millions or billions of parameters, on large-scale datasets. However, such cumbersome DNNs require heavy computational resources, which makes their deployment on devices with limited compute capacity and memory (embedded devices, mobile phones, etc.) very difficult. To overcome this limitation, efforts can be made at the algorithmic, architectural, and technological levels. From the algorithmic point of view, DNN compression techniques are an attractive solution. Moreover, innovative architectures and design flows have been proposed to reach a compromise between precision and energy efficiency. In summary, the main challenge is to propose lightweight architectures or optimized algorithms that achieve approximately the same performance as the original versions.
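To make two of the compression techniques mentioned above concrete, here is a minimal NumPy sketch of magnitude-based parameter pruning followed by symmetric int8 post-training quantization. The function names, the 50% sparsity target, and the weight shapes are illustrative choices, not taken from any paper in this issue.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.5)
q, scale = quantize_int8(w_pruned)
w_restored = q.astype(np.float32) * scale  # dequantize to check fidelity

print("sparsity:", np.mean(w_pruned == 0.0))
print("max abs dequantization error:", np.abs(w_pruned - w_restored).max())
```

Pruning stores only the surviving weights (here roughly half), while int8 quantization cuts memory by 4x versus float32, at the cost of a rounding error bounded by half the quantization step.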

This Special Issue aims to cover new developments and recent advances in the compression of deep neural networks for real-time applications. Topics include, but are not limited to, the following:

  • Cloud/FoG/Edge DNN challenges;
  • Knowledge distillation;
  • Parameter pruning and quantization;
  • Design flow and low power systems;
  • Low-rank factorization;
  • Transferred compact convolutional filters;
  • Hardware accelerators.

Dr. Jridi Maher
Dr. Thibault Napoléon
Dr. Ayoub Karine
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep neural networks
  • knowledge distillation
  • pruning/quantization model compression
  • computer vision
  • internet of things
  • CNN
  • FPGA, SoPC, GPU
  • IoMT: internet of multimedia things
  • algorithmic optimization
  • computational complexity reduction
  • AI implementation challenges
  • image processing

Published Papers (2 papers)


Research

26 pages, 9810 KiB  
Article
Deep Learning Accelerators’ Configuration Space Exploration Effect on Performance and Resource Utilization: A Gemmini Case Study
by Dennis Agyemanh Nana Gookyi, Eunchong Lee, Kyungho Kim, Sung-Joon Jang and Sang-Seol Lee
Sensors 2023, 23(5), 2380; https://doi.org/10.3390/s23052380 - 21 Feb 2023
Cited by 2 | Viewed by 2465
Abstract
Though custom deep learning (DL) hardware accelerators are attractive for making inferences in edge computing devices, their design and implementation remain a challenge. Open-source frameworks exist for exploring DL hardware accelerators. Gemmini is an open-source systolic array generator for agile DL accelerator exploration. This paper details the hardware/software components generated using Gemmini. The general matrix-matrix multiplication (GEMM) of different dataflow options, including output/weight stationary (OS/WS), was explored in Gemmini to estimate the performance relative to a CPU implementation. The Gemmini hardware was implemented on an FPGA device to explore the effect of several accelerator parameters, including array size, memory capacity, and the CPU/hardware image-to-column (im2col) module, on metrics such as the area, frequency, and power. This work revealed that regarding the performance, the WS dataflow offered a speedup of 3× relative to the OS dataflow, and the hardware im2col operation offered a speedup of 1.1× relative to the operation on the CPU. For hardware resources, an increase in the array size by a factor of 2 led to an increase in both the area and power by a factor of 3.3, and the im2col module led to an increase in area and power by factors of 1.01 and 1.06, respectively.
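The im2col transform whose hardware/software trade-off this paper measures can be sketched in a few lines of NumPy. This toy single-channel, stride-1 version is illustrative, not Gemmini's implementation: it unrolls the sliding convolution windows into matrix columns so that the convolution reduces to a single GEMM, the operation the systolic array accelerates.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll all kh*kw sliding windows of a 2-D input into columns,
    so convolution becomes one matrix multiplication (GEMM)."""
    H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((kh * kw, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

x = np.arange(16, dtype=np.float32).reshape(4, 4)
k = np.ones((3, 3), dtype=np.float32)  # 3x3 box filter as the kernel

# Convolution as GEMM: (1, kh*kw) @ (kh*kw, out_h*out_w), then reshape
y = (k.ravel() @ im2col(x, 3, 3)).reshape(2, 2)
print(y)  # each entry is the sum of one 3x3 window
```

Doing this unrolling in hardware (rather than on the CPU) is exactly the choice whose 1.1× speedup and ~1% area cost the paper quantifies.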

19 pages, 4217 KiB  
Article
Unsupervised Domain Adaptive 1D-CNN for Fault Diagnosis of Bearing
by Xiaorui Shao and Chang-Soo Kim
Sensors 2022, 22(11), 4156; https://doi.org/10.3390/s22114156 - 30 May 2022
Cited by 20 | Viewed by 2551
Abstract
Fault diagnosis (FD) plays a vital role in building a smart factory, improving system reliability and reducing costs. Recent deep learning-based methods have been applied to FD and have obtained excellent performance. However, most of them require sufficient historical labeled data to train the model, which is difficult and sometimes not available. Moreover, a large model size increases the difficulty of real-time FD. Therefore, this article proposes a domain-adaptive and lightweight framework for FD based on a one-dimensional convolutional neural network (1D-CNN). In particular, the 1D-CNN is designed with an autoencoder structure to extract rich, robust hidden features with less noise from source and target data. The extracted features are processed by correlation alignment (CORAL) to minimize domain shift. Thus, the proposed method can learn robust, domain-invariant features from raw signals without any historical labeled target-domain data for FD. We designed, trained, and tested the proposed method on the CWRU bearing datasets. A comprehensive comparative analysis confirmed its effectiveness for FD.
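The correlation alignment step used in this paper can be illustrated with a short NumPy sketch. This is the standard CORAL criterion, the squared Frobenius distance between source and target feature covariances, not the authors' exact code; the batch shapes and the variance shift in the synthetic data are illustrative.

```python
import numpy as np

def coral_loss(source, target):
    """CORAL: squared Frobenius distance between the feature covariances
    of a source batch and a target batch, normalized by 4*d^2."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)  # (d, d) source covariance
    ct = np.cov(target, rowvar=False)  # (d, d) target covariance
    return np.sum((cs - ct) ** 2) / (4 * d * d)

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(256, 8))  # source-domain features
tgt = rng.normal(0.0, 2.0, size=(256, 8))  # target with shifted statistics

print("loss across domains:", coral_loss(src, tgt))
print("loss within one domain:", coral_loss(src, src))
```

Minimizing this term alongside the supervised source loss pushes the extractor toward features whose second-order statistics match across domains, which is what allows training without labeled target data.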
