Thank you so much, Thomas (Diplom, 2014) and Daniel (Diplom, 2014), for hosting our alumni event in your office loft!
Abstract: Patch-flow analysis offers companies the possibility to analyze the collaboration of their organizational units using company-internal reference sources. Due to the diversity of the required data sources, some data can only be collected by hand, and no monitoring of completeness and accuracy has been established so far. This thesis investigates which characteristics of data quality are of interest and how manual collection influences completeness and accuracy. A goal-question-metric model is developed in order to evaluate patch-flow data with regard to completeness and accuracy. The model is then evaluated and discussed on the basis of concrete measurements.
Keywords: Inner source, patch-flow, data quality
PDFs: Work description
Reference: Jörn Rechenburg. Bewertung von Fehlerfreiheit und Vollständigkeit gemessener Patch-Flow Daten. Bachelor Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg: 2016.
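To illustrate the kind of metric such a goal-question-metric model might bottom out in, here is a minimal sketch of a completeness metric over manually collected patch records. All names and fields are hypothetical, not taken from the thesis:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical patch-flow record; in practice the required fields
# depend on the organization's data sources.
@dataclass
class Patch:
    author_unit: Optional[str]
    receiver_unit: Optional[str]
    commit_hash: Optional[str]

def completeness(patches: List[Patch]) -> float:
    """Metric: fraction of records in which all required fields are present."""
    required = ("author_unit", "receiver_unit", "commit_hash")
    complete = sum(
        all(getattr(p, f) is not None for f in required) for p in patches
    )
    return complete / len(patches) if patches else 0.0

patches = [
    Patch("Unit A", "Unit B", "abc123"),
    Patch("Unit A", None, "def456"),  # hand-collected record missing a field
]
print(completeness(patches))  # 0.5
```

In a GQM hierarchy, a goal such as "trust the patch-flow data" would be refined into questions ("are all records complete?", "are the recorded units accurate?"), each answered by metrics like the one above.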
The research project NetzDatenStrom has finally started. NetzDatenStrom is funded by the Federal Ministry for Economic Affairs and Energy in the context of the 6th energy research program. Experts in research and development, software producers of network control systems, grid operators, and IT experts in the energy sector are working together to integrate standard big-data solutions into existing network control systems. The project covers a three-year period and will be carried out by a consortium comprising the network control system vendors PSI AG, KISTERS AG, and BTC AG, the grid operator EWE NETZ GmbH, the OFFIS Institute for Information Technology (consortium leader), Friedrich-Alexander-University Erlangen-Nuremberg, and the Institute for Multimedia and Interactive Systems at the University of Lübeck. NetzDatenStrom is supported by openKONSEQUENZ and may contribute to the openKONSEQUENZ platform.
The official kick-off meeting and workshop took place on October 27th at OFFIS in Oldenburg. The first task is to specify practical and fundamental big-data use cases and to establish a foundation for the upcoming work steps. In the context of NetzDatenStrom, the Open Source Research Group will work on the integration of external data sources into existing network control systems and investigate the exploitation potential of open source software developed in a consortium.
Abstract: This master thesis proposes a concept and implementation of a microservice-based architecture for the Sweble Hub software. With this architecture, Sweble can in future be deployed in the cloud in order to handle a large workload from many users accessing the wiki. The thesis gives an overview of the microservice architecture pattern and of the additional components that become necessary in a distributed setting. A concept is introduced for slicing the current architecture of Sweble into microservices, and two of these microservices are implemented. Finally, the concept and implementation of the microservice architecture are evaluated for their suitability. It is shown that Sweble fits the microservice pattern. However, microservices are not a silver bullet, and the architectural style introduces additional complexity into the system because of the distributed environment.
Keywords: Sweble, WOM, Wikipedia, Microservice, Scalability
Reference: Christian Happ. Preparing the Sweble Hub Software for the Cloud. Master Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg: 2016.
Abstract: Despite the lack of crucial features, Wiki Markup is still the primary data format in Wikipedia. The Wiki Object Model (WOM) offers a modern, tree-based alternative. Using graph-based storage to integrate WOM as the primary data format in Wikipedia seems promising. Managing the immense revision history of Wikipedia articles is one of many challenges of this approach. In most cases, the difference between a revision and its successor is small; hence, there are many redundancies inside the database. To reduce these redundancies, an algorithm was designed that connects adjacent revision graphs and reuses parts of the predecessor graph. Moreover, strategies for traversing WOM resources are introduced, and user-defined edges between two arbitrary nodes are supported. Multiple tests with real Wikipedia articles and different configurations were performed to evaluate performance and storage savings. With the graph-based storage, redundancies between adjacent revisions are reduced to a minimum, while all the advantages provided by WOM are retained.
Keywords: Sweble, WOM, Wikipedia, graph database
Reference: Daniel Knogl. Design and Implementation of Graph-based Storage for Wikipedia Articles. Master Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg: 2016.
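The core idea of reusing parts of a predecessor graph can be sketched with content-addressed subtree sharing: if a node is stored under a hash of its content and children, unchanged subtrees of a new revision collapse onto the nodes already stored for the old one. This is only an illustrative sketch, not the thesis implementation:

```python
import hashlib

# Content-addressed node store: hash -> (label, tuple of child hashes).
# Identical subtrees across revisions map to the same key and are stored once.
store = {}

def put(label, children=()):
    """Insert a node; returns its content hash for use as a parent reference."""
    key = hashlib.sha1(repr((label, children)).encode()).hexdigest()
    store.setdefault(key, (label, children))
    return key

# Revision 1: an article with two sections.
s1 = put("section", (put("text-old"),))
s2 = put("section", (put("text-kept"),))
rev1 = put("article", (s1, s2))

# Revision 2: only the first section changed; the second is shared.
s1b = put("section", (put("text-new"),))
rev2 = put("article", (s1b, s2))

# Storing both revisions fully would take 10 nodes; sharing needs only 8.
print(len(store))  # 8
```

The more alike two adjacent revisions are, the fewer new nodes the successor contributes, which is exactly the situation in Wikipedia's revision history.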
We will host an industry talk on “Usability in Product Development” in PROD, our product management course. This talk is open to the public.
- by: Dipl.-Ing. (FH) Pablo Munoz Ibarra (Lead of the Usability Team SIMATIC TIA Portal at Siemens AG, Nürnberg)
- about: Usability in Product Development – From the Idea to a Product Release
- on: November 23rd, 2016, 10:15 a.m.
- at: Room K1-119 – Brose-Saal, Erwin-Rommel-Straße 60, 91058 Erlangen
- as part of: PROD
Abstract: The presentation will cover the different phases from the first product idea through development to the product release. It will also present many examples and real cases from the usability design of a complex software product for the automation industry.
Speaker: Pablo Munoz Ibarra is the Lead of the Usability Team SIMATIC TIA Portal. He has been working on the innovation of automation concepts and products for more than 25 years. As a Usability Manager in Product Management, he designs the user experience of Siemens automation software for PLC and HMI.
Abstract: Representational State Transfer (REST) is an efficient and by now well-established architectural style for distributed hypermedia systems. However, REST has not been designed for more than short-term offline operation, yet many applications must keep functioning when going offline for more than a few seconds. Burdening the application with knowledge about the offline status is undesirable. Based on a formal model that describes RESTful systems as finite-state machines, we define a function to derive a finite-state machine for the client side. We then extend existing caching approaches for offline operation so that a client-side proxy can transparently hide the offline status from the application in all derived states. We validate our solution with a proxy layer that covers all test cases derived from the state model. Using our model and proxy, clients do not have to know or worry about whether they are online or offline.
Keywords: REST, hypermedia, offline capability
Reference: Tobias Fertig. Towards Offline Support for Restful Applications. Master Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg: 2016.
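The idea of a client-side proxy that transparently hides the offline status can be sketched in a few lines: the proxy caches successful responses and replays the last known state when the transport fails. This is a hypothetical illustration of the general caching approach, not the thesis code, which additionally derives the set of safely replayable states from a formal model:

```python
# Minimal sketch of a transparent offline proxy for safe (GET-like) requests.
class OfflineProxy:
    def __init__(self, fetch):
        self.fetch = fetch   # real transport, e.g. an HTTP client function
        self.cache = {}      # URL -> last successfully fetched response body

    def get(self, url):
        try:
            body = self.fetch(url)   # may raise when the network is down
            self.cache[url] = body
            return body
        except ConnectionError:
            if url in self.cache:    # serve the last known state while offline
                return self.cache[url]
            raise                    # no cached state for this resource

# Toy transport whose connectivity we can toggle for demonstration.
def flaky_fetch(url):
    if flaky_fetch.online:
        return "live:" + url
    raise ConnectionError

flaky_fetch.online = True
proxy = OfflineProxy(flaky_fetch)
print(proxy.get("/orders/1"))   # live:/orders/1
flaky_fetch.online = False
print(proxy.get("/orders/1"))   # live:/orders/1 (replayed from the cache)
```

The application calls `proxy.get` exactly as it would call the real transport and never learns whether the response came from the network or the cache; handling unsafe requests (POST, PUT, DELETE) while offline is the harder problem the thesis addresses.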
Abstract: Requirements elicitation is an important factor in software engineering. The information needed is mainly elicited through interviews and other qualitative sources. The analysis that follows is often an ad-hoc process that relies on the expertise of the analyst(s) and is therefore hardly replicable. Additionally, the process is not transparent, as the resulting modeling elements cannot be mapped back to the initial data. First attempts to solve these issues by adapting the clearly defined steps of Qualitative Data Analysis (QDA) suggest that the approach should be pursued further. In order to further formalize the process, this thesis proposes a metamodel that allows deriving structure and behavior models from the same coding process. The metamodel is derived by analyzing an existing metamodel and by comparing different existing code systems and their resulting modeling artifacts. The metamodel is extended with a rule system and tested on an exemplary data set. For validation, the resulting models are compared to models from an ad-hoc modeling process and evaluated by experts. Results show that utilizing QDA with a code system metamodel increases transparency and makes it easier to vary the detail levels of the derived models.
Keywords: Domain Model, Domain Analysis, Requirements Engineering, Qualitative Data Analysis, QDA
Reference: Sindy Salow. A Metamodel for Code Systems. Master Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg: 2016.