Projects (Giorgos Flouris)

Below you can find a list of the projects I have been involved in.
Sommelier: A Recommendation System for Wine-Food Matching (Internal FORTH project, 2017-2018)
Abstract: Our aim in this project is to develop a platform for matching wine with food. It is intended as an aid for restaurant customers who wish to pair their chosen food with a suitable wine.
Role: Researcher
Worked on: Project management support
Budget (FORTH/total): 50/50 (thousands of €)
Holistic Benchmarking of Big Linked Data (RIA, H2020-ICT-16-2015, 2015-2018)
Abstract: Linked Data has gained significant momentum in recent years. It is now used at industrial scale in many sectors in which an increasingly large amount of rapidly changing data needs to be processed. HOBBIT is an ambitious project that aims to push the development of Big Linked Data (BLD) processing solutions by providing a family of industry-relevant benchmarks for the BLD value chain through a generic evaluation platform. We aim to make open deterministic benchmarks available to test the performance of existing systems and push the development of innovative industry-relevant solutions. The underlying data will mimic real industrial data assembled during the course of the project. At the beginning of the project, HOBBIT will work on roughly 1PB of real industry-relevant data from 4 different domains. The data will be extended through collaborations during the project. To push the use of the benchmarks, we will organize or join challenges that aim to measure the performance of technologies for the different steps of the BLD lifecycle. In contrast to existing benchmarks, we will provide modular and easily extensible benchmarks for all industry-relevant BLD processing steps, allowing the assessment of whole suites of software that cover more than one step. The infrastructure necessary to run the evaluation campaigns will be made available. Our architecture will rely on web interfaces and cloud infrastructures to ensure scalability. The open HOBBIT platform will make human- and machine-readable public periodic reports available. As an exit strategy, the project will create an association after the second project year that will be sustained by means of subscriptions from industry and academia and associated with existing benchmarking associations. The clear portfolio of added value for the members will be defined in the early project stages and disseminated throughout the evaluation campaigns.
Role: Researcher
Worked on: Link discovery analysis benchmark, versioning benchmark
Budget (FORTH/total): 355/4191 (thousands of €)
MEthinks: A Platform for Managing Online Comments (Internal FORTH project, 2015-2017)
Abstract: Our aim in this project is to develop a platform for managing and visualizing online comments, using techniques from argumentation.
Role: Coordinator
Worked on: Project management, computational argumentation
Budget (FORTH/total): 100/100 (thousands of €)
DIACHRON – Managing the Evolution and Preservation of the Data Web (IP, FP7-ICT-2011.4.3, 2013-2016)
Abstract: The Web has not only caused a revolution in communication; it also has completely changed the way we gather and use data. Open data – data that is available to everyone – is exponentially growing, and it has completely transformed the way we now conduct any kind of research or scholarship; it has changed the scientific method. The recent development of Linked Open Data has only increased the possibilities for exploiting public data. Given the value of open data, how do we preserve it for future use? Currently, much of the data we use, e.g., demographic records, clinical statistics, personal and enterprise data as well as many scientific measurements cannot be reproduced. However, there is overwhelming evidence that we should keep such data where it is technically and economically feasible to do so. Until now this problem has been approached by keeping this information in fixed data sets and using extensions to the standard methods of disseminating and archiving traditional (paper) artifacts. Given the complexity, the interlinking and the dynamic nature of current data, especially Linked Open Data, radically new methods are needed. DIACHRON tackles this problem with a fundamental assumption: that the processes of publishing and preserving data are one and the same. Data are archived at the point of creation, and archiving and dissemination are synonymous. DIACHRON takes on the challenges of evolution, archiving, provenance, annotation, citation, and data quality in the context of Linked Open Data and modern database systems. DIACHRON intends to automate the collection of metadata, provenance and all forms of contextual information so that data are accessible and usable at the point of creation and remain so indefinitely. The results of DIACHRON are evaluated in three large-scale use cases: open governmental data life-cycles, large enterprise data intranets and scientific data ecosystems in the life-sciences.
Role: Principal Investigator
Worked on: Change detection, data repairing
Budget (FORTH/total): 703/6386 (thousands of €)
Multi-context Reasoning in Heterogeneous Environments (12SLO_ET29_1087, joint Greek-Slovak project, national funding, 2013-2015)
Abstract: The research objective of this work is the theoretical foundation of formalisms for knowledge representation, belief revision and defeasible reasoning, with a focus on contextual reasoning. These formalisms will be developed in order to be used in complex, dynamic and evolving application scenarios, such as the scenarios appearing in the Semantic Web and Ambient Intelligence applications. Our formalisms will be evaluated both theoretically, by determining their theoretical properties, and practically, via experiments evaluating their effectiveness and intuitiveness. Applicability, efficiency and integration of multiple heterogeneous components constitute critical characteristics, on which both partners will focus.
Role: Coordinator
Worked on: Conflict resolution, argumentation, non-monotonic reasoning
Budget (FORTH/total): 15/20 (thousands of €)
An Interactive Learning Environment Fostering Creativity (IP, FP7-ICT-2011-8, 2012-2015)
Abstract: Innovation and creativity are predictors of success in a knowledge-based economy. Yet the “fuzziness” and unpredictability of the creative workflow remains an obstacle for effective ICT support. Tools that require users to formalize and structure their ideas and working processes to a degree at odds with creative practice are frequently rejected. The IdeaGarden project therefore starts from an understanding of creative problem solving as a complex and situated knowledge practice rather than as a set of well-defined methods and techniques. The project will develop a creative learning environment, capitalizing on the notion of visual information mash-ups as catalysts for creative working and learning. Adopting a practice-oriented approach will further the understanding of creativity in different settings and open up new perspectives for ICT support. This perspective will also give rise to new methods for seeding and cultivating creative knowledge practices in workplace and educational settings. To leverage the capabilities of current ICT systems, new interaction techniques will be devised that enable users to stay in control and collaboratively navigate the creative process, handling multiple types of resources. In addition, taking advantage of the “Linked Data” paradigm will provide new possibilities for creative search, the construction of knowledge, as well as the reflection of the collaborative process. The R&D work starts with research into creative knowledge work in industry (at LEGO and EOOS) and education (at the Muthesius Academy of Fine Arts and Design). Based on the careful analysis of creative practices, we will envision and implement working demonstrators. Formative evaluation combined with a two-round development process will ensure that the IdeaGarden system fits its users’ needs, while summative evaluation will validate the overall utility of the approach to promote nonlinear, non-standard thinking and problem solving among experts and novices.
Role: Researcher
Worked on: Change detection
Budget (FORTH/total): 335/3101 (thousands of €)
Linked Data Benchmark Council (STREP, FP7-ICT-2011-8, 2012-2015)
Abstract: Non-relational data management is emerging as a critical need for the new data economy based on large, distributed, heterogeneous, and complexly structured data sets. This new data management paradigm also provides an opportunity for research results to impact young innovative companies working on new RDF and graph data management technologies to start playing a significant role in this new data economy. Standards and benchmarking are two of the most important factors for the development of new information technology, yet there is still no comprehensive suite of benchmarks and benchmarking practices for RDF and graph databases, nor is there an authority for setting benchmark definitions and auditing official results. Without them, the future development and uptake of these technologies is at risk by not providing industry with clear, user-driven targets for performance and functionality. The goal of the Linked Data Benchmark Council (LDBC) project is to create the first comprehensive suite of open, fair and vendor-neutral benchmarks for RDF/graph databases together with the LDBC foundation which will define processes for obtaining, auditing and publishing results. The core scientific innovation of LDBC is therefore to define meaningful benchmarks derived from a combination of actual usage scenarios combined with the technical insight of top database systems researchers and architects in the choke points of current technology. LDBC will bring together a broad community of researchers and RDF and graph database vendors to establish an independent authority, the LDBC foundation, responsible for specifying benchmarks, benchmarking procedures and verifying/publishing results. The forum created will become a long-surviving, industry supported association similar to the TPC. Vendors and user organisations will participate in order to influence benchmark design and to make use of the obvious marketing opportunities.
Role: Researcher
Worked on: Reasoning and instance matching benchmarks
Budget (FORTH/total): 259/3463 (thousands of €)
Alliance for Permanent Access to the Records of Science in Europe Network (NoE, FP7-ICT-2009-4.1, 2011-2014)
Abstract: Digital preservation offers the economic and social benefits associated with the long-term preservation of information, knowledge and know-how for re-use by later generations. However, digital preservation has a great problem, namely that preservation support structures are built on short-lived projects and are fragmented. The unique feature of APARSEN is that it is building on the already established Alliance for Permanent Access (APA), a membership organisation of major European stakeholders in digital data and digital preservation. These stakeholders have come together to create a shared vision and framework for a sustainable digital information infrastructure providing permanent access to digitally encoded information. To this self-sustaining grouping APARSEN will bring a wide range of other experts in digital preservation including academic and commercial researchers, as well as researchers in other cross-European organisations. The members of the APA and other members of the consortium already undertake research in digital preservation individually, but even here the effort is fragmented, despite smaller groupings of these organisations working together in specific EU and national projects. APARSEN will help to combine and integrate these programmes into a shared programme of work, thereby creating the pre-eminent virtual research centre in digital preservation in Europe, if not the world. The APA provides a natural basis for a longer-term consolidation of digital preservation research and expertise. The Joint Programme of Activity will cover: (a) technical methods for preservation, access and, most importantly, re-use of data holdings over the whole lifecycle; (b) legal and economic issues including costs and governance issues as well as digital rights; (c) outreach within and outside the consortium to help to create a discipline of data curators with appropriate qualifications.
Role: Researcher
Worked on: Evolution of workflow provenance information
Budget (FORTH/total): 310/8665 (thousands of €)
PlanetData (NoE, FP7-ICT-2009-5, 2010-2014)
Abstract: PlanetData aims to establish a sustainable European community of researchers that supports organizations in exposing their data in new and useful ways. The ability to effectively and efficiently make sense of the enormous amounts of data continuously published online, including data streams, (micro)blog posts, digital archives, eScience resources, public sector data sets, and the Linked Open Data Cloud, is a crucial ingredient for Europe’s transition to a knowledge society. It allows businesses, governments, communities and individuals to take decisions in an informed manner, ensuring competitive advantages and general welfare. Research will concentrate on three key challenges that need to be addressed for effective data exposure in a usable form at global scale. We will provide representations for stream-like data, and scalable techniques to publish, access and integrate such data sources on the Web. We will establish mechanisms to assess, record, and, where possible, improve the quality of data through repair. To further enhance the usefulness of data - in particular when it comes to the effectiveness of data processing and retrieval - we will define means to capture the context in which data is produced and understood - including space, time and social aspects. Finally, we will develop access control mechanisms - in order to attract exposure of certain types of valuable data sets, it is necessary to take proper account of their owners’ concerns to maintain control and respect for privacy and provenance, while not hampering non-contentious use. We will test all of the above on a highly scalable data infrastructure, supporting relational, RDF, and stream processing, and on novel data sets exposed through the network, and derive best practices for data owners.
By providing these key precursors, complemented by a comprehensive training, dissemination, standardization and networking program, we will enable and promote effective exposure of data at planetary scale.
Role: Principal Investigator
Worked on: Data repairing, data provenance, access control, data privacy
Budget (FORTH/total): 328/3723 (thousands of €)
Developing Knowledge-Practices Laboratory (IP, FP6-2004-IST-4, 2006-2011)
Abstract: The present project, Knowledge-practices Laboratory (KP-Lab), aims at facilitating innovative practices of working with knowledge (“knowledge practices”) in education and workplaces. KP-Lab presents a unifying view of human cognition based on an assumption that learning is not just individual knowledge acquisition or social interaction, but a shared effort of transforming ideas and social practices, i.e. the knowledge-creation perspective. KP-Lab technology builds on emerging technologies, such as the semantic web, real-time multimedia communication, ubiquitous access using wireless devices, and interorganisational computing. KP-Lab is a modular, flexible and extensible system consisting of a cluster of inter-operable applications. The user environment is a virtual shared space and set of tools that enables collaborative knowledge practices around shared knowledge artefacts. KP-Lab involves design experiments and longitudinal studies in schools, polytechnics, universities, teacher training, and professional organizations. A series of KP-Lab courses will be organized during which students will solve complex problems for real customers, whether those are enterprises, public organizations, or research communities. Extended pilots involve scaling up emerging good practices across a large number of students. Prevailing practices of managing knowledge in professional organizations will be analyzed. Tools will be developed to support reflection on interactive processes and to assist in managing the creation of knowledge and organizational transformation. KP-Lab technology will emerge through co-configuration of tools and co-evolution of practices between the participants and developers. A European multi-disciplinary research network will be established. Theories, pedagogies and technologies of KP-Lab will be disseminated across European education and workplaces. The technologies developed will be mostly based on open source technology to facilitate maximal dissemination.
Role: WP-leader
Worked on: Data evolution and repairing, versioning, change detection
Budget (FORTH/total): 952/10450 (thousands of €)
Cultural, Artistic and Scientific knowledge for Preservation, Access and Retrieval (IP, FP6-2005-IST-2.5.10, 2006-2009)
Abstract: CASPAR will address a growing challenge facing society, namely the deluge of intrinsically fragile digital information upon which it is increasingly dependent, by building a pioneering framework to support the end-to-end preservation “lifecycle” for scientific, artistic and cultural information, based on existing and emerging standards. The ambitious challenge of building up a common preservation framework for heterogeneous data and a variety of innovative applications will be achieved through the following high-level objectives:
- to establish the foundation methodology applicable to a very wide range of preservation issues. The guiding principle of CASPAR is the application of the OAIS Reference Model.
- to research, develop and integrate advanced components to be used in a wide range of preservation activities. These components will be the building blocks of the CASPAR Framework.
- to create the CASPAR framework: the software platform that enables the building of services and applications that can be adapted to multiple areas.
The CASPAR consortium will demonstrate the validity of the CASPAR framework through heterogeneous testbeds, covering a wide range of disciplines from science to culture to contemporary arts and multi-media, providing a reliable common infrastructure which can be used or replicated in many more areas. The CASPAR consortium will seek to guarantee the future evolution of CASPAR. This ambitious goal will be pursued through: (a) building the CASPAR preservation user community, creating consensus around the initiative and gathering a critical mass of potential users; (b) embedding the CASPAR framework and components within key memory organisations, both national and international. To achieve this, CASPAR brings together a consortium covering important digital holdings, with the appropriate extensive scientific, cultural and creative expertise, together with commercial partners and world leaders in the field of information preservation.
Role: Researcher
Worked on: Digital Preservation
Budget (FORTH/total): 665/15078 (thousands of €)


Last updated: 18/09/2017.