Category: Rawashdeh

    Driving in the Snow is a Team Effort for AI Sensors

    by Allison Mills, University Marketing and Communications

    A major challenge for fully autonomous vehicles is navigating bad weather. Snow especially confounds crucial sensor data that helps a vehicle gauge depth, find obstacles and keep on the correct side of the yellow line, assuming it is visible. Averaging more than 200 inches of snow every winter, Michigan’s Keweenaw Peninsula is the perfect place to push autonomous vehicle tech to its limits.

    In two papers presented at SPIE Defense + Commercial Sensing 2021, researchers from Michigan Technological University discuss solutions for snowy driving scenarios that could help bring self-driving options to snowy cities like Chicago, Detroit, Minneapolis and Toronto.

    The team includes Nathir Rawashdeh and doctoral student Abu-Alrub (CC) as well as Jeremy Bos and student researchers Akhil Kurup, Derek Chopp and Zach Jeffries (ECE).

    Read more about their collaborative mobility research on mtu.edu/news.

    This MTU news story was published by Science Daily, TechXplore, Knowridge Science Report, and other research news aggregators.


    Nathir Rawashdeh Publishes Paper at SPIE Conference

    Nathir Rawashdeh (AC) led the publication of a paper at the recent online SPIE Defense + Commercial Sensing / Autonomous Systems 2021 Conference.

    The paper, entitled “Drivable path detection using CNN sensor fusion for autonomous driving in the snow,” targets the problem of drivable path detection in poor weather conditions including on snow-covered roads. The authors used artificial intelligence to perform camera, radar and LiDAR sensor fusion to detect a drivable path for a passenger car on snow-covered streets. A companion video is available. 

    Co-authors include Jeremy Bos (ECE).
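
    The paper's specific network is not reproduced here, but the general idea of CNN-based camera/radar/LiDAR fusion for drivable-path segmentation can be sketched. The following minimal PyTorch example is illustrative only: the layer sizes, the input formats (an RGB image plus radar and LiDAR bird's-eye-view grids), and the single-mask output are assumptions, not the authors' architecture.

# Minimal, illustrative sketch of camera/radar/LiDAR fusion for drivable-path
# segmentation. NOT the architecture from the paper; layer sizes and input
# formats are assumptions.
import torch
import torch.nn as nn

def encoder(in_channels: int) -> nn.Sequential:
    """Small convolutional encoder that downsamples its input by 4x."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    )

class FusionPathNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.camera_enc = encoder(3)   # RGB camera frame
        self.radar_enc = encoder(1)    # radar occupancy/intensity grid
        self.lidar_enc = encoder(1)    # LiDAR bird's-eye-view height grid
        # Decoder upsamples the fused features back to input resolution
        # and predicts a per-pixel drivable-path probability.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(96, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, camera, radar, lidar):
        fused = torch.cat(
            [self.camera_enc(camera), self.radar_enc(radar), self.lidar_enc(lidar)],
            dim=1,
        )
        return torch.sigmoid(self.decoder(fused))  # drivable-path mask

if __name__ == "__main__":
    net = FusionPathNet()
    cam = torch.rand(1, 3, 128, 256)   # dummy camera frame
    rad = torch.rand(1, 1, 128, 256)   # dummy radar grid
    lid = torch.rand(1, 1, 128, 256)   # dummy LiDAR BEV grid
    print(net(cam, rad, lid).shape)    # torch.Size([1, 1, 128, 256])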


    Nathir Rawashdeh Presents, Publishes Research at Mechatronics Conference

    A conference paper entitled "Interfacing Computing Platforms for Dynamic Control and Identification of an Industrial KUKA Robot Arm," by Assistant Professor Nathir Rawashdeh, Applied Computing, has been published in IEEE Xplore.

    In this work, a KUKA robotic arm controller was interfaced with a PC using open source Java tools to record the robot axis movements and implement a 2D printing/drawing feature.
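
    The paper's interface was built with open-source Java tools; purely as an illustration of the pattern, the Python sketch below shows a PC-side client that polls a robot controller for axis positions and logs them for later analysis. The controller address, port, command string, and reply format are all placeholder assumptions, not the protocol used in the paper.

# Hypothetical PC-side logger for robot axis positions over TCP. The paper's
# interface used open-source Java tools; this sketch only illustrates the
# general record-the-axes idea. Host, port, and the comma-separated reply
# format are placeholder assumptions.
import csv
import socket
import time

HOST, PORT = "192.168.1.10", 7000   # placeholder controller address
POLL_PERIOD_S = 0.05                # 20 Hz sampling (assumption)

def log_axes(path: str, duration_s: float = 10.0) -> None:
    """Poll the controller for joint angles and append them to a CSV file."""
    with socket.create_connection((HOST, PORT), timeout=2.0) as sock, \
         open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "a1", "a2", "a3", "a4", "a5", "a6"])
        start = time.time()
        while time.time() - start < duration_s:
            sock.sendall(b"GET_AXES\n")               # placeholder command
            reply = sock.recv(1024).decode().strip()  # e.g. "10.1,-45.0,..."
            writer.writerow([time.time() - start, *reply.split(",")])
            time.sleep(POLL_PERIOD_S)

if __name__ == "__main__":
    log_axes("axis_log.csv", duration_s=5.0)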

    The paper was presented at the 2020 21st International Conference on Research and Education in Mechatronics (REM). Details available at the IEEE Xplore database.


    Nathir Rawashdeh Publishes Paper in BioSciences Journal

    A paper on skin cancer image feature extraction, co-authored by Assistant Professor Nathir Rawashdeh (DataS, Applied Computing), has been published this month in the EurAsian Journal of BioSciences.

    View the open access article, “Visual feature extraction from dermoscopic colour images for classification of melanocytic skin lesions,” here.

    Additional authors are Walid Al-Zyoud, Athar Abu Helou, and Eslam AlQasem, all with the Department of Biomedical Engineering, German Jordanian University, Amman, Jordan.

    Citation: Al-Zyoud, Walid et al. “Visual feature extraction from dermoscopic colour images for classification of melanocytic skin lesions”. Eurasian Journal of Biosciences, vol. 14, no. 1, 2020, pp. 1299-1307.
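
    As a rough illustration of the kind of colour-feature extraction the paper discusses (not the authors' actual feature set), the sketch below computes simple per-channel colour statistics and a crude colour-variegation count inside a given lesion mask. The feature names and the quantisation step are assumptions for the example.

# Illustrative colour-feature extraction from a dermoscopic RGB image.
# Assumes a lesion segmentation mask is already available.
import numpy as np

def colour_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Basic colour statistics inside the lesion region.

    image: H x W x 3 RGB array, mask: H x W boolean array.
    """
    lesion = image[mask].astype(float)      # N x 3 pixels inside the lesion
    feats = {}
    for i, ch in enumerate("RGB"):
        feats[f"mean_{ch}"] = lesion[:, i].mean()
        feats[f"std_{ch}"] = lesion[:, i].std()
    # Crude colour-variegation cue: number of distinct coarse colour bins.
    bins = (lesion // 64).astype(int)       # quantise each channel to 4 levels
    feats["n_colour_bins"] = len({tuple(p) for p in bins})
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    msk = np.zeros((64, 64), dtype=bool)
    msk[16:48, 16:48] = True                # dummy lesion mask
    print(colour_features(img, msk))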

    Rawashdeh’s interests include unmanned ground vehicles, electromobility, robotics, image analysis, and color science. He is a senior member of the IEEE.


    Computing Awards COVID-19 Research Seed Grants

    The College of Computing is pleased to announce that it has awarded five faculty seed grants, which will provide immediate funding in support of research projects addressing critical needs during the current global pandemic.

    Tim Havens, College of Computing associate dean for research, said that the faculty seed grants will enable progress on new research with the potential to make an impact during the current pandemic. Additional details will be shared soon.


    Congratulations to the winning teams!

    Guy Hembroff (AC, HI): “Development of a Novel Hospital Use Resource Prediction Model to Improve Local Community Pandemic Disaster Planning”

    Leo Ureel (CS) and Charles Wallace (CS): “Classroom Cyber-Physical Simulation of Disease Transmission”

    Bo Chen (CS): “Mobile Devices Can Help Mitigate Spreading of Coronavirus”

    Nathir Rawashdeh (AC, MERET): “A Tele-Operated Mobile Robot for Sterilizing Indoor Space Using UV Light” (A special thanks to Paul Williams, whose generous gift to support AI and robotics research made this grant possible)

    Weihua Zhou (AC, HI) and Jinshan Tang (AC, MERET): “KD4COVID19: An Open Research Platform Using Feature Engineering and Machine Learning for Knowledge Discovery and Risk Stratification of COVID-19”



    Technical Paper by Nathir Rawashdeh Accepted for SAE World Congress

    An SAE technical paper, co-authored by Nathir Rawashdeh, assistant professor, CMH Division, College of Computing, has been accepted for publication at the WCX SAE World Congress Experience, April 21-23, 2020, in Detroit, MI.  The title of the paper is “Mobile Robot Localization Evaluations with Visual Odometry in Varying Environments using Festo-Robotino.” 

    Abstract: Autonomous ground vehicles can use a variety of techniques to navigate the environment and deduce their motion and location from sensory inputs. Visual odometry can provide a means for an autonomous vehicle to gain orientation and position information from camera images recorded as the vehicle moves. This is especially useful when global positioning system (GPS) information is unavailable or wheel encoder measurements are unreliable. Feature-based visual odometry algorithms extract corner points from image frames and detect patterns of feature-point movement over time. From this information, it is possible to estimate the camera's motion, i.e., the vehicle's motion. Visual odometry has its own set of challenges, such as detecting an insufficient number of feature points, poor camera setup, and fast-passing objects interrupting the scene. This paper investigates the effects of various disturbances on visual odometry. Moreover, it discusses the outcomes of several experiments performed using the Festo-Robotino robotic platform. The experiments are designed to evaluate how changing the system's setup affects the overall quality and performance of an autonomous driving system. Environmental effects such as ambient light, shadows, and terrain are also investigated. Finally, possible improvements, including varying camera options and programming methods, are discussed.
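
    As an illustration of the feature-based pipeline the abstract describes (corner extraction, matching, and motion estimation), the sketch below estimates the relative camera pose between two consecutive grayscale frames with OpenCV. It is not the paper's implementation, and the camera intrinsics matrix K is a placeholder assumption.

# Minimal feature-based visual odometry step between two consecutive grayscale
# frames: ORB corner extraction, brute-force matching, and essential-matrix
# motion estimation. The intrinsics K below are placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],    # assumed focal length / principal point
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(frame1: np.ndarray, frame2: np.ndarray):
    """Estimate rotation R and unit-scale translation t from frame1 to frame2."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outliers such as fast-moving objects crossing the scene.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t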

    Learn more.


    Nathir Rawashdeh to Present Paper at Advances in Mechanical Engineering Conference

    A conference paper co-authored by Nathir Rawashdeh (CC/MERET), has been accepted for presentation and publication at the 5th International Conference on Advances in Mechanical Engineering, December 17-19, 2019, in Istanbul, Turkey.

    The paper is entitled, “Effect of Camera’s Focal Plane Array Fill Factor on Digital Image Correlation Measurement Accuracy.” Co-authors are Ala L. Hijazi of German Jordanian University, and Christian J. Kähler of Universität der Bundeswehr München.

    Abstract: The digital image correlation (DIC) method is one of the most widely used non-invasive, full-field methods for deformation and strain measurements. It is currently used in a wide variety of applications, including mechanical engineering, aerospace engineering, structural engineering, manufacturing engineering, material science, non-destructive testing, and biomedical and life sciences. Many factors affect DIC measurement accuracy, including the selection of the correlation algorithm and its parameters, the camera, the lens, the type and quality of the speckle pattern, the lighting conditions, and the surrounding environment. Several studies have addressed the different factors influencing the accuracy of DIC measurements and the sources of error. The camera’s focal plane array (FPA) fill factor is one such parameter for digital cameras, though it is not widely known and is usually not reported in spec sheets. The fill factor of an imaging sensor is defined as the ratio of a pixel’s light-sensitive area to its total theoretical area. For some types of imaging sensors, the fill factor can theoretically reach 100%. However, for the types of imaging sensors typically found in the digital cameras used for DIC measurements, such as the “interline” charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) sensors, the fill factor is much less than 100%. It is generally believed that a lower fill factor may reduce the accuracy of photogrammetric measurements; nevertheless, there are no studies addressing the effect of the imaging sensor’s fill factor on DIC measurement accuracy. We report on research aiming to quantify the effect of fill factor on DIC measurement accuracy in terms of displacement error and strain error. We use rigid-body-translation experiments and then numerically modify the recorded images to synthesize three different types of images at 1/4 of the original resolution. Each type of synthesized image has a different fill factor value, namely 100%, 50%, and 25%. By performing DIC analysis with the same parameters on the three types of synthesized images, the effect of fill factor on measurement accuracy can be observed. Our results show that the FPA’s fill factor can have a significant effect on the accuracy of DIC measurements. This effect clearly depends on the type and characteristics of the speckle pattern. The fill factor has a clear effect on measurement error for low-contrast speckle patterns and for high-contrast speckle patterns (black dots on a white background) with small dot size (3-pixel dot diameter). However, when the dot size is large enough (about 7 pixels in diameter), the fill factor has a very minor effect on measurement error. In addition, the results show that the effect of the fill factor also depends on the magnitude of translation between images. For instance, the increase in measurement error resulting from a low fill factor can be more significant for subpixel translations than for large translations of several pixels.
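
    The subset-matching idea at the core of DIC can be illustrated with a toy example (not the paper's analysis): track a small speckle subset from a reference image to a deformed image using normalised cross-correlation. Real DIC codes add subpixel interpolation and full-field strain computation; the subset size and the synthetic speckle images below are assumptions for the demo.

# Toy illustration of DIC-style subset matching: find the integer-pixel
# displacement of a speckle subset via normalised cross-correlation.
import cv2
import numpy as np

def subset_displacement(ref: np.ndarray, deformed: np.ndarray,
                        center: tuple, half: int = 15) -> tuple:
    """Integer-pixel displacement (dx, dy) of the subset around `center` (x, y)."""
    x, y = center
    subset = ref[y - half:y + half + 1, x - half:x + half + 1]
    score = cv2.matchTemplate(deformed, subset, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(score)       # best-match top-left corner
    return bx + half - x, by + half - y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, size=(200, 200), dtype=np.uint8)  # random speckle
    deformed = np.roll(ref, shift=(3, 5), axis=(0, 1))           # shift down 3, right 5
    print(subset_displacement(ref, deformed, center=(100, 100)))  # expect (5, 3)
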
    Request the full paper here.


    Nathir Rawashdeh To Present Talk Fri., Dec. 6

    Nathir Rawashdeh, College of Computing Assistant Professor of Mechatronics, Electrical, and Robotics Engineering Technology, will present a talk this Friday, December 6, from 3:00 to 4:00 p.m., in Rekhi 214. Rawashdeh will present a review of recent advancements in Unmanned Ground Vehicle (UGV) applications, hardware, and software with a focus on vehicle localization and autonomous navigation. Refreshments will be served.

    Abstract: Unmanned ground vehicles (UGVs) are being applied in many scenarios, including indoor, outdoor, and even extraterrestrial environments. Advancements in hardware and software algorithms reduce their cost and enable the creation of complete UGV platforms designed for custom application development, as well as research into new sensors and algorithms.