Making Introductions for Job Interviews

As a human being, as a professional, and more recently as a professor, I’m happy to help people find jobs (time permitting). In fact, at the professorship we have tagged the HR professionals in our CRM database so that we can reach out to them easily. Still, introductions for job interviews require preparation on the part of the job seeker. There are a couple of things to consider.

The most common mistake job seekers make is to ask me: Help me find a job in software engineering, or product management, or something else. Even if the request is accompanied by a resume, what am I supposed to make of it? Pass the resume on to every company in the world?

The job of job seeking starts with the job seeker. They must find out where they want to go.

If they can’t, they should at least identify some companies of interest and name them to me, so that I can decide whether I can actually be of help.

Clarification of “wissenschaftlicher Anspruch” (Scientific Aspiration) in Final Theses

Please see this new page on our website:

Final Thesis: The Uni1 Immune System for Continuous Delivery

Abstract: In this thesis we propose an immune system for the continuous delivery process of the Uni1 application. We add canary deployments and show how continuous monitoring can be used to detect negative behaviour of the application as a result of a recent deployment. The Uni1 application is analyzed via user-defined health conditions, which are based on a number of metrics monitored by the immune system. In case of degraded behaviour, the immune system uses rollbacks to revert the Uni1 application to the last stable version. With the help of the immune system, application developers no longer have to manually monitor whether a deployment completes successfully, but can instead rely on the immune system to gracefully handle deployment errors.

Keywords: Continuous delivery, continuous deployment, system monitoring, immune system

PDFs: Final thesis, Work description

Reference: Philipp Eichhorn. The Uni1 Immune System for Continuous Delivery. Master Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg: 2016.
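As a rough illustration of the health-condition idea from the abstract, here is a minimal sketch; the class and function names are hypothetical and not taken from the thesis implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class HealthCondition:
    """A user-defined condition over one monitored metric."""
    metric: str                          # e.g. "error_rate"
    predicate: Callable[[float], bool]   # True means healthy


def is_healthy(metrics: Dict[str, float],
               conditions: List[HealthCondition]) -> bool:
    """A deployment counts as healthy only if every condition holds."""
    return all(c.predicate(metrics[c.metric]) for c in conditions)


def monitor_deployment(metrics: Dict[str, float],
                       conditions: List[HealthCondition],
                       rollback: Callable[[], None]) -> bool:
    """Revert to the last stable version on degraded behaviour."""
    if is_healthy(metrics, conditions):
        return True
    rollback()
    return False


# Usage: error rate must stay below 1%, p95 latency below 300 ms.
conditions = [
    HealthCondition("error_rate", lambda v: v < 0.01),
    HealthCondition("latency_p95_ms", lambda v: v < 300.0),
]
# An error rate of 5% violates the first condition and triggers the rollback.
monitor_deployment({"error_rate": 0.05, "latency_p95_ms": 120.0},
                   conditions,
                   rollback=lambda: print("rolling back"))
```

The point of the design is that developers only declare conditions; the monitoring component evaluates them and decides about the rollback.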

The 2016 Letter to Stakeholders (Year-end)

Welcome to the 2016 (year-end) letter to stakeholders of the Professorship of Open Source Software at the Friedrich-Alexander-University Erlangen-Nürnberg! (Download as PDF.)

  1. Highlights
  2. Research
  3. Teaching
  4. Industry
  5. Finances
  6. Alumni
  7. Thank you!


In 2016, we started multiple new research projects and intensified work on existing ones: inner source with Siemens Digital Factory, Siemens Healthineers (formerly Siemens Healthcare), and Continental Corporation; open source governance with a large unnamed multi-national company; and continuous deployment and open data integration with several energy distribution companies and academic partners.

Following a 2015 ICSE paper, we published two top-tier journal papers in 2016, one in Transactions on Software Engineering (TSE) and one in ACM Computing Surveys. The TSE paper led to a journal-first invited research talk at FSE 2016, which, next to ICSE, is one of the two top software engineering conferences.

A Short Overview of Our Research Areas and Projects

Inner source software engineering

Inner source software development applies open source best practices and processes to firm-internal software development. Engineering artifacts are laid open to the whole organization, inviting use and potential contribution across organizational boundaries. Inner source breaks down development silos and complements traditional top-down development structures with bottom-up self-organization. Firms benefit from better code reuse and improved knowledge sharing, among other things.

Continuous deployment

Continuous deployment is the process of “continuously” putting engineering innovation into production. In software, done right, continuous deployment leads to innovation release cycles that are counted in minutes rather than months or years. We are researching the full tool chain, practices, and processes, ranging from code repository to live monitoring of the continuously deployed system. A current focus is on the “immune system”, the system monitoring component that recognizes a bad deployment and rolls it back.

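The canary-and-rollback loop described above can be sketched in a few lines; the function name and the traffic-ramp steps are illustrative assumptions, not part of our actual tool chain:

```python
def canary_rollout(steps, canary_ok):
    """Ramp canary traffic through `steps` (fractions of total traffic).

    `canary_ok(fraction)` stands in for the monitoring component ("immune
    system"): it returns False when the canary misbehaves at that traffic
    level. Returns the fraction reached (1.0 means fully promoted;
    anything less means the rollout was aborted and traffic rolled back).
    """
    reached = 0.0
    for fraction in steps:
        if not canary_ok(fraction):
            return reached          # abort: route traffic back to stable
        reached = fraction
    return reached


# Usage: a canary that degrades once it serves more than 25% of traffic.
result = canary_rollout([0.05, 0.25, 0.5, 1.0],
                        canary_ok=lambda f: f <= 0.25)
# result is 0.25: the rollout stopped before full promotion.
```

The interesting research questions sit inside `canary_ok`: which metrics to watch, how to compare canary against baseline, and how quickly a bad deployment can be recognized.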
Requirements engineering (QDAcity-RE)

Requirements engineering today lacks “pre-RS” traceability: the ability to trace requirements back to the stakeholders who asked for them, to how conflicting requirements were resolved, and to how prioritization decisions were made. The QDAcity-RE project utilizes qualitative data analysis (QDA) methods to determine requirements from “soft” input like interviews, workshops, and prior documentation. QDAcity-RE speeds up the elicitation of high-quality, pre-RS-traceable requirements.

Corporate open source governance

Open source governance (and compliance) is the set of firm-internal processes that ensure that a firm can benefit from using high-quality open source components in its products. Risks posed by the ungoverned use of open source in products are the loss of exclusive ownership of the source code and patents associated with the software, as well as potentially high fines or lawsuit settlement costs when dragged into court. We guide firms toward proper open source governance using a handbook of best practices for good governance.

Open source business models

According to the forthcoming Bitkom manifest on open source, a successful software industry not only uses open source, but strategically leads open source projects. While contributing patches to non-differentiating open source components may be a no-brainer, deciding when to join an open source foundation or start an open source project requires more thought. We are developing tools, practices, and processes for situation assessment and decision making on strategic leadership in open source software development.

Distributed knowledge collaboration (Sweble)

Git and related projects have given the world a new decentralized way of collaborating around source code. The Sweble project applies a similar collaboration model to knowledge content, e.g. wikis. Use cases are cross-department collaboration, vendor-customer collaboration, and cross-company collaboration. By replacing the centralized model of knowledge collaboration with a decentralized one, Sweble gives different groups and companies independence of work while allowing for fast and efficient integration when desired.

Final Thesis: Pricing at Everest SARL (Teaching Case)

Abstract: Pricing directly influences a company’s profitability, yet receives little attention in most business education. It is a highly complex topic, where a single decision can make or break a business. This Harvard-style case study presents the story of a small French software company in a pricing crisis: heavy discounting caused the company to enter a crisis after it scaled down its consulting activities. The case study aims to teach students how to identify pricing issues, how to analyze price data, and which pricing best practices to follow. The company’s identity and data have been anonymized at the company’s request.

Keywords: Pricing, pricing policy, discounting

PDFs: Final thesis

Reference: Ernst Haagsman. Pricing at Everest SARL: A Case Study. Master Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg: 2016.

Impressions from the 2016 Alumni Event

Thank you so much, Thomas (Diplom, 2014) and Daniel (Diplom, 2014), for hosting our alumni event in your office loft!

Final Thesis: Bewertung von Fehlerfreiheit und Vollständigkeit gemessener Patch-Flow Daten

Abstract: Patch-flow analysis offers companies the possibility to analyze the collaboration of their organisational units with company-internal reference sources. Due to the diversity of the required data sources, some data can only be collected by hand. Monitoring the completeness and accuracy of this data has not been established so far. This thesis investigates which characteristics of data quality are of interest and how manual collection influences completeness and accuracy. A goal-question-metric model is developed in order to evaluate patch-flow data with regard to completeness and accuracy. The model is then evaluated and discussed on the basis of actual measured values.

Keywords: Inner source, patch-flow, data quality

PDFs: Work description

Reference: Jörn Rechenburg. Bewertung von Fehlerfreiheit und Vollständigkeit gemessener Patch-Flow Daten. Bachelor Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg: 2016.
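To make the goal-question-metric (GQM) approach from the abstract concrete, here is an illustrative sketch; the specific goal, question, and metric below are invented examples, not the model developed in the thesis:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Metric:
    """A measurable quantity that answers (part of) a question."""
    name: str
    compute: Callable[[list], float]


@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)


@dataclass
class Goal:
    """A GQM goal is refined into questions, which are answered by metrics."""
    text: str
    questions: List[Question] = field(default_factory=list)


# Example completeness metric: share of manually collected patch records
# that name the receiving organisational unit.
def fraction_with_receiving_unit(patches: list) -> float:
    if not patches:
        return 0.0
    return sum(1 for p in patches if p.get("receiving_unit")) / len(patches)


goal = Goal(
    "Assess the completeness of collected patch-flow data",
    [Question(
        "How many patch records are missing required fields?",
        [Metric("receiving-unit coverage", fraction_with_receiving_unit)],
    )],
)

# Two records, one of which is missing the receiving unit.
patches = [{"receiving_unit": "BU-A"}, {"receiving_unit": None}]
coverage = goal.questions[0].metrics[0].compute(patches)  # 0.5
```

The value of the GQM structure is traceability in the other direction: every metric value can be traced back to the question it answers and the data-quality goal it serves.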

NetzDatenStrom Project Has Started

The research project NetzDatenStrom has finally started. NetzDatenStrom is funded by the Federal Ministry for Economic Affairs and Energy in the context of the 6th energy research program. Experts in research and development, producers of network control system software, grid operators, and IT experts in the energy sector are working together to integrate standard big-data solutions into existing network control systems. The project covers a three-year period and will be carried out by a consortium comprising the network control system vendors PSI AG, KISTERS AG, and BTC AG, grid operator EWE NETZ GmbH, the OFFIS Institute for Information Technology (consortium leader), Friedrich-Alexander-University Erlangen-Nuremberg, and the Institute for Multimedia and Interactive Systems at the University of Lübeck. NetzDatenStrom is supported by openKONSEQUENZ and may contribute to the openKONSEQUENZ platform.

The official kick-off meeting and workshop took place on October 27th at OFFIS in Oldenburg. The first task is to specify practical and fundamental big-data use cases and to establish a foundation for the upcoming work steps. In the context of NetzDatenStrom, the Open Source Research Group will work on the integration of external data sources into existing network control systems and investigate the exploitation potential of open source software developed in a consortium.