Matematikcentrum

Lunds universitet

Research Projects

Continuous surveillance of animal welfare in housed dairy cows using image analysis technology

The project aims to identify biomarkers of animal health and welfare in order to increase knowledge of animal behavior and basic needs in general, and of the importance of locomotion disorders in particular. With larger herds and higher production requirements, less time is available for supervising the individual animal. In milk production, the estimated management time per cow and year has been halved, from 40 h in a tie-stall barn to 20 h in a cubicle system with automatic milking (AMS). This makes it more difficult to detect disease before it endangers animal welfare. We also know that the number of euthanized and dead cows increases in cubicle systems, an indication that disease was not detected in time and that the animals suffered unnecessarily.

Using image analysis, the animals are monitored with cameras and the recorded material is analyzed mathematically. Algorithms identify each animal as well as its position, movements and interactions with other animals and with the equipment. The system can single out animals with disease problems or disturbed behavior, so that they can be treated in time or the cause of the disorder corrected before it has gone too far. The system can also be used to evaluate and correct risk factors for disturbed animal behavior that affect the function of the housing system and threaten to cause disease or behavioral problems. Healthy, sustainable cows also mean a lower environmental load.
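The core idea of turning camera pixels into animal positions can be hinted at with a minimal sketch. The frame-differencing detector below is purely illustrative and not the project's actual method, which would rely on trained detectors and per-animal identification:

```python
import numpy as np

def detect_activity(prev_frame, frame, threshold=25):
    """Return the centroid (x, y) of changed pixels between two grayscale
    frames, or None if nothing moved. A toy stand-in for a real detector."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None  # no movement between these two frames
    ys, xs = np.nonzero(mask)
    return (float(xs.mean()), float(ys.mean()))  # centroid of all motion

# Synthetic example: a bright "animal" moves from column 10 to column 40.
prev = np.zeros((100, 100), dtype=np.uint8)
prev[50:60, 10:20] = 200
curr = np.zeros((100, 100), dtype=np.uint8)
curr[50:60, 40:50] = 200
print(detect_activity(prev, curr))
```

Tracking a centroid like this over time gives the position and movement traces from which behavioral measures could then be derived.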

Main applicant: 

Christer Bergsten
SLU Biosystem och teknologi

Co-applicants:

Kalle Åström
LU, Matematikcentrum 

Anders Herlin
SLU, Biosystem och teknologi 

Funded by FORMAS. 
Period: 2014-2016.

DOGS (SWElife)

Prostate cancer is the most common cancer in men in Sweden, after superficial skin cancer. Correct characterization of the tumor with respect to grade and stage is important for choosing the best treatment. Tumor grade is assessed by Gleason grading, which evaluates the growth patterns of the tumor in prostate biopsies and is today the best biomarker for the prognosis of prostate cancer. The assessment is, however, subjective and labor-intensive. In this project we are developing computerized, automatic Gleason grading for fast, reproducible and objective assessment of tumor grade.

Expected effects and results

Computerized image analysis of Gleason grade will become an aid for pathologists, enabling fast and accurate diagnosis of prostate cancer. It can lead to increased reproducibility and less inter-observer variation in assessing the aggressiveness of the tumor. The ultimate goal is to increase the precision of individualized diagnosis and treatment of prostate cancer, and to reduce costs to society.

Planned approach and implementation

Computerized image analysis will be developed for assessing Gleason grade in prostate biopsies and prostatectomy specimens, based on scanned images from several hospitals. The software will be validated against the assessments of experienced pathologists and will be commercialized by Sectra.
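The general shape of such a pipeline, extracting features from image patches and assigning a grade, can be sketched in miniature. Everything below (the two-number feature, the centroids, the grades) is a hypothetical toy, not the project's actual classifier:

```python
import numpy as np

def texture_feature(patch):
    """Toy feature: mean and standard deviation of pixel intensity.
    A real grading system would use much richer texture and morphology
    features learned from annotated slides."""
    return np.array([patch.mean(), patch.std()])

def nearest_centroid_grade(patch, centroids):
    """Assign the grade whose feature centroid is closest to the patch."""
    f = texture_feature(patch)
    grades = list(centroids)
    dists = [np.linalg.norm(f - centroids[g]) for g in grades]
    return grades[int(np.argmin(dists))]

# Hypothetical centroids for two growth patterns (illustrative numbers only).
centroids = {3: np.array([120.0, 10.0]), 4: np.array([80.0, 30.0])}

rng = np.random.default_rng(0)
patch = rng.normal(118, 11, size=(64, 64))  # resembles the grade-3 centroid
print(nearest_centroid_grade(patch, centroids))
```

Validation in the project then amounts to comparing such automatic grades against the pathologists' assessments over many patches.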

ELLIIT

ELLIIT is a strategic research environment funded by the Swedish government in 2010 as part of its initiative to support strong research in information technology and mobile communications. ELLIIT has four partners: Linköping University, Lund University, Halmstad University and Blekinge Institute of Technology. ELLIIT constitutes a platform for both fundamental and applied research, and for cross-fertilization between disciplines and between academic researchers and industry experts. ELLIIT stands out for the quality and visibility of its publications and for its ability to attract and retain top research talent, and aims to be recognized as a leading international research organization.

eSSENCE

Global Indoor Positioning in 3D

Vinnova-funded project together with Combain.

InDeV (In Depth Analysis of Vulnerable Road Users)

An example of what a traffic safety AI may do, in this case detecting vehicles and vulnerable road users in overhead surveillance videos.

This project is motivated by the need for an automatic system that can ease the task of annotating massive amounts of traffic data. Such analysis is relevant for traffic data in general, and especially desirable when searching for near misses in traffic flows. One goal is the design of a watchdog system that uses traffic video data to remove the large portions of video in which no events or interactions occur; this reduces the amount of video that has to be annotated manually. A second goal is a fully automated detection and tracking system that keeps track of all objects in the scene and automatically detects traffic events of interest. The project is cross-disciplinary joint work with traffic researchers, and the consortium includes seven research organizations across Europe.
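The watchdog idea, keeping only the video where something happens, can be sketched in a few lines. This is a minimal frame-differencing illustration under invented thresholds, not the project's actual system:

```python
import numpy as np

def watchdog_filter(frames, threshold=10.0):
    """Return indices of frames that differ noticeably from the previous
    frame - a toy version of discarding video chunks with no activity."""
    keep = []
    for i in range(1, len(frames)):
        mean_change = np.abs(
            frames[i].astype(np.float64) - frames[i - 1].astype(np.float64)
        ).mean()
        if mean_change > threshold:
            keep.append(i)
    return keep

# Synthetic clip: 5 empty frames, then a bright object appears in frame 5
# and stays still, so only the frame where it appears is flagged.
frames = [np.zeros((50, 50), dtype=np.uint8) for _ in range(8)]
for i in range(5, 8):
    frames[i][10:30, 10:30] = 255
print(watchdog_filter(frames))
```

A real system would of course trigger on tracked objects and interactions rather than raw pixel change, but the data-reduction principle is the same.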

Positioning Lab at MAPCI

Lund Positioning Lab is now opening – a research laboratory specializing in positioning. The lab is part of MAPCI – the Mobile and Pervasive Computing Institute at Lund University. The aim is to gather all positioning research at the University, to give business and industry the opportunity to use state-of-the-art laboratory premises, and to offer close contact with academic research in the field.

Robust Methods for 3D-Reconstruction of Static and Non-Static Objects, Scenes and Environments

Semantic Mapping and Visual Navigation for Smart Robots

Why is it that today’s autonomous systems for visual inference tasks are often restricted to a narrow set of scene types and controlled lab settings? Examining the best performing perceptual systems reveals that each inference task is solved with a specialized methodology. For instance, object recognition and 3D scene reconstruction, despite being strongly connected problems, are treated independently and an integrated theory is lacking. We believe that in order to reach further, it is necessary to develop smart systems that are capable of integrating the different aspects of vision in a collaborative manner. We gather expertise from computer vision, machine learning, automatic control and optimization with the ambitious goal of establishing such an integrated framework. The research is structured into four work packages: 1) scene modelling, 2) visual recognition, 3) visual navigation and 4) system integration to achieve a perceptual robotic system for exploration and learning in unknown environments. As a demonstrator, we will construct an autonomous system for visual inspection of a supermarket using small-scale, low-cost quadcopters. The system goes well beyond the current state-of-the-art and will provide a complete solution for semantic mapping and visual navigation. The basic research outcomes are relevant to a wide range of industrial applications including self-driving cars, unmanned surface vehicles, street-view modelling and flexible inspection in general.

Sony - research collaborations on various computer vision topics

Quality Enhancement in Laboratory Tests using Image Analysis

This project aims to implement and further develop the use of image analysis in the laboratory. Imaging makes laboratory operations more efficient; additional effects are improved quality and increased traceability. The project is a continuation of SBUF project 12 275, "Use of imaging in evaluating the adequacy of the roller bottle method", and extends it to include more methods, which creates synergies both in the research and in the implementation of the technology in the laboratory. The project aims at developing methods that could be in regular use in asphalt laboratories. The main idea is to use standard components; the image analysis software must, however, be developed for each application. The project has a purely scientific component, but a large part of it consists of drafting procedure manuals and practical recommendations.

The aim of this project is to study the geometry and algebra of multiple camera systems. During the last decade there have been many attempts at building fully automatic structure-and-motion systems for ordinary cameras. Much is known about minimal cases, feature detection, tracking, and structure and motion recovery for ordinary cameras. Many automatic systems rely on small image motions to solve the correspondence problem. In combination with most cameras' small fields of view, this limits how the camera can be moved in order to obtain a good 3D reconstruction. The problem is significantly more stable with a large field of view, which has spurred research into so-called omnidirectional or non-central cameras. A difficulty with ordinary cameras, with or without a large field of view, is the inherent ambiguities of the structure and motion problem: there are ambiguous configurations for which structure and motion recovery is impossible.

Funded by the Development Fund of the Swedish Construction Industry (SBUF). 
Principal Investigator: Anders Heyden.
Period: 2010-2015.

Visual SLAM

WASP