In this contributed article, editorial consultant Jelani Harper suggests that since each form of AI has its own strengths and challenges, prudent organizations will combine these approaches for the most effective results. Certain solutions in this space pair vector databases and LLM applications with knowledge graph environments, which are ideal for employing Graph Neural Networks and other forms of advanced machine learning.
Scaling Data Quality with Computer Vision on Spatial Data
In this contributed article, editorial consultant Jelani Harper discusses a number of hot topics today: computer vision, data quality, and spatial data. Computer vision is an extremely viable facet of advanced machine learning for the enterprise. Its utility for data quality is evinced by several high-profile use cases, and the technology can produce similar boons for other facets of the ever-shifting data ecosystem.
CEA-Leti Paper Reports Memristor-Based Bayesian Neural Network Implementation
GRENOBLE, France – Dec. 7, 2023 – A team comprising CEA-Leti, CEA-List and two CNRS laboratories has published a paper in Nature Communications presenting what the authors said is the first complete memristor-based Bayesian neural network implementation for a real-world task — classifying types of arrhythmia recordings with precise aleatoric and epistemic uncertainty. Considering medical-diagnosis and […]
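The memristor hardware itself can't be reproduced in software, but the uncertainty split the paper targets can be. Below is a minimal NumPy sketch, not the authors' implementation, of the standard decomposition of a Bayesian classifier's predictive uncertainty into aleatoric and epistemic parts; the Dirichlet-sampled probabilities stand in for Monte Carlo draws of network weights and are purely illustrative.

```python
import numpy as np

def uncertainty_decomposition(probs):
    """Split predictive uncertainty for a Bayesian classifier.

    probs: (n_samples, n_classes) class probabilities, one row per
    Monte Carlo draw of the network's weights for a single input.
    Returns (total, aleatoric, epistemic) in nats.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    # Entropy of the averaged prediction: total uncertainty.
    total = -(mean_p * np.log(mean_p + eps)).sum()
    # Average entropy of each draw: irreducible (aleatoric) part.
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    # What remains is model (epistemic) uncertainty.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Toy example: 50 weight samples over 4 hypothetical arrhythmia classes.
rng = np.random.default_rng(1)
probs = rng.dirichlet(alpha=[2, 1, 1, 1], size=50)
print(uncertainty_decomposition(probs))
```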
Video Highlights: Attention Is All You Need – Paper Explained
In this video presentation, Mohammad Namvarpour presents a comprehensive study of Ashish Vaswani and his coauthors' renowned paper, "Attention Is All You Need." The paper marked a major turning point in deep learning research. The transformer architecture it introduced is now used in a variety of state-of-the-art models in natural language processing and beyond. Transformers are the basis of the large language models (LLMs) we're seeing today.
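At the core of the paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V. Here is a minimal NumPy sketch of that formula; the shapes and toy inputs are illustrative only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need".

    Q, K: (seq_len, d_k) queries and keys; V: (seq_len, d_v) values.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(d_k)
    # to keep the softmax in a well-behaved range.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension yields attention weights per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the values.
    return weights @ V

# Toy usage: self-attention over 4 tokens with 8-dim features.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```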
Anomaly Detection: Its Real-Life Uses and the Latest Advances
In this contributed article, Al Gharakhanian, Machine Learning Development Director, Cognityze, takes a look at anomaly detection through its real-life use cases, addressing critical factors as well as its relationship to machine learning and artificial neural networks.
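For a concrete flavor of what anomaly detection looks like in practice, here is a minimal sketch using scikit-learn's IsolationForest on synthetic 2-D data; the dataset and contamination rate are assumptions for illustration, not drawn from the article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Mostly "normal" points plus a handful of injected outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
outliers = rng.uniform(low=-6, high=6, size=(10, 2))
X = np.vstack([normal, outliers])

# Isolation forests flag points that are easy to isolate with
# random splits; contamination sets the expected outlier fraction.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(X)  # -1 = anomaly, 1 = normal
print("flagged:", int((labels == -1).sum()), "points")
```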
Research Highlights: Deep Neural Networks and Tabular Data: A Survey
In this regular column, we take a look at highlights of important research topics of the day for big data, data science, machine learning, AI and deep learning. It's important to keep connected with the research arm of the field in order to see where we're headed. In this edition, we feature a new paper showing that for tabular data, algorithms based on gradient-boosted tree ensembles still outperform deep learning models. Enjoy!
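As a quick illustration of the model family the survey finds still leads on tabular benchmarks, here is a minimal sketch using scikit-learn's HistGradientBoostingClassifier on a synthetic tabular dataset; the data and hyperparameters are stand-ins, not the paper's benchmarks.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a typical mid-sized tabular dataset.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# A histogram-based gradient-boosted tree ensemble with defaults.
model = HistGradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```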
The Amazing Applications of Graph Neural Networks
In this contributed article, editorial consultant Jelani Harper points out that a generous portion of enterprise data is Euclidean and readily vectorized. However, there's a wealth of non-Euclidean, multidimensional data serving as the catalyst for astounding machine learning use cases.
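Graph Neural Networks handle such non-Euclidean data by passing messages along graph edges. Below is a minimal NumPy sketch of one GCN-style layer, a mean over each node's neighborhood followed by a learned linear map; the toy graph and dimensions are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step.

    A: (n, n) adjacency matrix; H: (n, d_in) node features;
    W: (d_in, d_out) learned weights.
    """
    # Add self-loops so each node keeps its own features.
    A_hat = A + np.eye(A.shape[0])
    # Mean-aggregate every node's neighborhood.
    deg = A_hat.sum(axis=1, keepdims=True)
    H_agg = (A_hat / deg) @ H
    # Learned linear map plus ReLU nonlinearity.
    return np.maximum(H_agg @ W, 0)

# Toy usage: a 4-node path graph, 3-dim features, 2-dim output.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (4, 2)
```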
HPC Predictions 2021: Quantum Beyond Qubits; the Rising Composable and Memory-Centric Tide; Neural Network Explosion; 5G’s Slow March
Annual technology predictions are like years: some are better than others. Nonetheless, many provide useful insight and serve as the basis for worthwhile discussion. Over the last few months we received a number of HPC and AI predictions for 2021; here are the most interesting and potentially most valid. Let's check in 12 months from now […]
What’s Under the Hood of Neural Networks?
In this contributed article, Pippa Cole, Science Writer at the London Institute for Mathematical Sciences, discusses new research on artificial neural networks that has added to concerns that we don't have a clue what machine learning algorithms are up to under the hood. She highlights a new study that compared two completely different deep-layered machines and found that they in fact did exactly the same thing, which came as a huge surprise. It's a demonstration of how little we understand about the inner workings of deep-layered neural networks.
Research Highlights: Attention Condensers
A group of AI researchers from DarwinAI and the University of Waterloo announced an important theoretical development in deep learning around "attention condensers." The paper describing this advancement is: "TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices," by Alexander Wong, et al. Wong is DarwinAI's CTO.