Research articles

  • With the rapid development of natural language processing (NLP) models in the last decade came the realization that high performance levels on test sets do not imply that a model robustly generalizes to a wide range of scenarios. Hupkes et al. review generalization approaches in the NLP literature and propose a taxonomy based on five axes to analyse such studies: motivation, type of generalization, type of data shift, the source of this data shift, and the locus of the shift within the modelling pipeline.

    • Dieuwke Hupkes
    • Mario Giulianelli
    • Zhijing Jin
    Analysis | Open Access
  • The number of publications in artificial intelligence (AI) has been increasing exponentially, and staying on top of progress in the field is a challenging task. Krenn and colleagues model the evolution of the growing AI literature as a semantic network and use it to benchmark several machine learning methods that can predict promising research directions in AI.

    • Mario Krenn
    • Lorenzo Buffoni
    • Michael Kopp
    Analysis | Open Access
  • Training a deep neural network can be costly, but training time is reduced when a pre-trained network can be adapted to different use cases. Ideally, only a small number of parameters need to be changed during fine-tuning, so that the updated parameters can be distributed more easily. In this Analysis, different methods of fine-tuning with only a small number of parameters are compared on a large set of natural language processing tasks; a minimal illustrative sketch of the general idea follows this entry.

    • Ning Ding
    • Yujia Qin
    • Maosong Sun
    Analysis | Open Access
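
A minimal sketch of the general idea behind the entry above, assuming PyTorch and a hypothetical stand-in backbone: the pre-trained weights are frozen and only a small task-specific head is trained and distributed. This is purely illustrative and is not one of the specific parameter-efficient methods benchmarked in the Analysis.

```python
# Illustrative parameter-efficient fine-tuning: freeze a pre-trained
# backbone and update only a small set of added parameters.
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone (hypothetical; in practice this
# would be a large pre-trained language model).
backbone = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))

# Freeze every pre-trained parameter.
for p in backbone.parameters():
    p.requires_grad = False

# Small task-specific head: the only parameters that are trained and
# that would need to be stored or shared per downstream task.
head = nn.Linear(768, 2)

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random data.
x = torch.randn(8, 768)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
logits = head(backbone(x))
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()

trainable = sum(p.numel() for p in head.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable parameters: {trainable} / {total}")
```
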
  • Recent developments in deep learning have allowed for a leap in computational analysis of epigenomic data, but a fair comparison of different architectures is challenging. Toneyan et al. use GOPHER, their new framework for model evaluation and comparison, to perform a comprehensive analysis, exploring modelling choices for deep learning on epigenomic profiles.

    • Shushan Toneyan
    • Ziqi Tang
    • Peter K. Koo
    Analysis
  • Deep learning methods have in recent years shown promising results in characterizing proteins and extracting complex sequence–structure–function relationships. This Analysis describes a benchmarking study to compare the performances and advantages of recent deep learning approaches in a range of protein prediction tasks.

    • Serbulent Unsal
    • Heval Atas
    • Tunca Doğan
    Analysis
  • Many machine learning-based approaches have been developed for the prognosis and diagnosis of COVID-19 from medical images, and this Analysis identifies over 2,200 relevant published papers and preprints in this area. After initial screening, 62 studies are analysed in detail, and the authors find that all of them have methodological flaws that stand in the way of clinical utility. The authors make several recommendations to address these issues.

    • Michael Roberts
    • Derek Driggs
    • Carola-Bibiane Schönlieb
    Analysis | Open Access
  • Several technology companies offer platforms for users without coding experience to develop deep learning algorithms. This Analysis compares the performance of six ‘code-free deep learning’ platforms (from Amazon, Apple, Clarifai, Google, MedicMind and Microsoft) in creating medical image classification models.

    • Edward Korot
    • Zeyu Guan
    • Pearse A. Keane
    Analysis | Open Access
  • Many functions of RNA strands that do not code for proteins are still to be deciphered. Methods for classifying different groups of non-coding RNA increasingly rely on deep learning, but the landscape is diverse, and these methods need to be categorized and benchmarked to move the field forward. The authors take a close look at six state-of-the-art deep learning non-coding RNA classifiers and compare their architectures and performance.

    • Noorul Amin
    • Annette McGrath
    • Yi-Ping Phoebe Chen
    Analysis