Seminar in Computational Linguistics

  • Date: –15:00
  • Location: 2-0024 & https://uu-se.zoom.us/j/63619411634
  • Lecturers: Anders Hast, Robin Strand and Carolina Wählby
  • Contact person: Gongbo Tang
  • Seminar

Presentations by the Visual Information and Interaction group from the IT department

Carolina Wählby

Digital image processing and analysis make it possible to use a microscope not only as a means of producing pretty pictures, but also as a tool for extracting quantitative measurements from experiments involving biological and biomedical samples. Automatically extracted quantitative information becomes even more important when conducting large-scale experiments, or when data is too complex to decipher visually. Today, we see how artificial intelligence and deep learning advance fields like image cytometry and digital pathology. We also see how combinations of imaging modalities and unsupervised learning, rather than tedious manual annotations, may be used to train networks.
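
To make "quantitative measurements" concrete, here is a minimal Python sketch that segments bright objects (e.g. stained nuclei) in a fluorescence image and reports per-object statistics. The file name, Otsu threshold, and size filter are illustrative assumptions using scikit-image, not the speaker's actual pipeline.

    from skimage import io, filters, measure, morphology

    # Hypothetical input: a single-channel fluorescence microscopy image.
    image = io.imread("cells.tif")

    # Segment: global Otsu threshold, then drop tiny noise specks.
    mask = image > filters.threshold_otsu(image)
    mask = morphology.remove_small_objects(mask, min_size=50)

    # Label connected components and measure each detected object.
    labels = measure.label(mask)
    for region in measure.regionprops(labels, intensity_image=image):
        print(region.label, region.area, region.mean_intensity)
    print(labels.max(), "objects detected")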

Robin Strand

The massive amount of medical image data being made available in both research and clinical work today is often too big to be parsed by human experts, and this puts high demands on physicians' processing of the image data. Computer-aided tools have great potential for creating a sustainable work situation for physicians and for generating disease understanding. Computer-assisted methods often perform at least as well as human experts on well-defined problems where performance can be quantified by a loss function.

In this presentation, an overview of ongoing projects in medical image processing will be given. We develop methods designed to efficiently find patterns in large-scale medical image data, utilizing components from big data analysis and artificial intelligence methodology. One such method is Imiomics, which enables statistical analyses of relations between whole-body image data in large cohorts and other non-imaging data, at an unprecedented level of detail and spatial resolution; a small sketch of the underlying idea follows below.
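
To give a flavor of the kind of analysis Imiomics enables, the following is a minimal sketch of a voxel-wise cohort study: correlating every voxel of registered whole-body volumes with a non-imaging covariate such as age. It assumes all volumes have already been deformed to a common coordinate system, and the toy data is random; the actual registration and statistical machinery behind Imiomics are considerably more elaborate.

    import numpy as np

    def voxelwise_correlation(volumes, covariate):
        """volumes: (n_subjects, x, y, z) images registered to a common
        coordinate system; covariate: (n_subjects,) non-imaging variable.
        Returns an (x, y, z) map of Pearson correlations."""
        v = volumes - volumes.mean(axis=0)                 # center per voxel
        c = (covariate - covariate.mean())[:, None, None, None]
        num = (v * c).sum(axis=0)
        den = np.sqrt((v ** 2).sum(axis=0) * (c ** 2).sum())
        return num / np.maximum(den, 1e-12)                # guard against /0

    # Toy cohort: 20 subjects, tiny 8x8x8 "whole-body" volumes.
    rng = np.random.default_rng(0)
    volumes = rng.normal(size=(20, 8, 8, 8))
    age = rng.uniform(20, 80, size=20)
    r_map = voxelwise_correlation(volumes, age)            # one r per voxel
    print(r_map.shape, float(r_map.min()), float(r_map.max()))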

Anders Hast

Fast and easy transcription of handwritten documents 

Printed books can be converted into searchable, machine-encoded text using Optical Character Recognition (OCR). Handwritten text, however, is much harder to convert, due to the large variation in handwriting style between writers and because even a single writer inevitably produces the same word with small variations in size and shape. Handwritten Text Recognition (HTR) has therefore emerged as an active research field aiming to solve the problem of automatic word recognition and text conversion.

Transcription is a tedious, time-consuming task. Several applications exist that facilitate the process, but they usually require rather large amounts of training data that first need to be transcribed and annotated by hand. We are therefore developing a framework for fast semi-automatic collection of words, which even allows a group of users to transcribe a text in arbitrary word order. This helps in finding words much faster for subsequent learning, and it also makes it possible to search in document collections that have not yet been transcribed (see the sketch below). Examples from ongoing research projects will be presented, and it will be shown where NLP could be needed.
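
The sketch below illustrates one way search in untranscribed collections can work: image-based word spotting, where segmented word images are ranked by feature distance to a query word image. The column-ink-profile embedding is a toy stand-in of our own invention; real systems typically learn word-image features with deep networks.

    import numpy as np

    def embed(word_image, bins=32):
        """Toy embedding: normalized column ink profile of a grayscale
        word image (darker pixels = ink)."""
        profile = (word_image < 128).sum(axis=0).astype(float)
        # Resample to a fixed length so words of different widths compare.
        resampled = np.interp(np.linspace(0, 1, bins),
                              np.linspace(0, 1, len(profile)), profile)
        return resampled / (np.linalg.norm(resampled) + 1e-12)

    def spot(query, collection):
        """Return indices of word images in the collection, most similar first."""
        q = embed(query)
        dists = [np.linalg.norm(q - embed(w)) for w in collection]
        return list(np.argsort(dists))

    # Toy usage with random "word images" of varying widths.
    rng = np.random.default_rng(1)
    words = [rng.integers(0, 256, size=(40, int(rng.integers(60, 120))))
             for _ in range(5)]
    print(spot(words[0], words))   # the query itself should rank first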