Eya Ben Charrada, David Caspar, Cédric Jeanneret, Martin Glinz, Towards a benchmark for traceability, In: Joint ERCIM Workshop on Software Evolution (EVOL) and International Workshop on Principles of Software Evolution (IWPSE), Association for Computing Machinery, 2011-09-05. (Conference or Workshop Paper published in Proceedings)
Rigorously evaluating and comparing traceability link generation techniques is a challenging task. Traceability is still expensive to implement, so it is difficult to find a complete case study that includes both a rich set of artifacts and traceability links among them. Consequently, researchers usually have to create their own case studies by taking a number of existing artifacts and creating traceability links for them. There are two major issues with creating one's own example. First, creating a meaningful case study is time-consuming. Second, the created case usually covers a limited set of artifacts and has limited applicability (e.g., a case with traces from high-level requirements to low-level requirements cannot be used to evaluate traceability techniques that are meant to generate links from documentation to source code). We propose a benchmark for traceability that includes all artifacts typically produced during the development of a software system, together with end-to-end traceability links among them. The benchmark is based on an irrigation system that was elaborated in a book about software design. The main task considered by the benchmark is the generation of traceability links among different types of software artifacts. Such a traceability benchmark will help advance research in this field because it facilitates the evaluation and comparison of traceability techniques and makes replicating experiments easy. As a proof of concept, we used the benchmark to evaluate the precision and recall of a link generation technique based on the vector space model. Our results are comparable to those obtained by other researchers using the same technique.

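The proof of concept uses a standard vector space model pipeline. Below is a minimal sketch of that pipeline, not the authors' implementation: artifacts become TF-IDF vectors, pairs above a cosine-similarity cutoff become candidate links, and candidates are scored against a gold standard for precision and recall. The toy artifacts, the 0.1 threshold, and the gold set are invented for illustration.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build a sparse TF-IDF vector for each artifact in {artifact_id: text}."""
    tokenized = {d: text.lower().split() for d, text in docs.items()}
    n = len(docs)
    df = Counter(term for toks in tokenized.values() for term in set(toks))
    return {d: {t: (1 + math.log(c)) * math.log(n / df[t])
                for t, c in Counter(toks).items() if df[t] < n}  # drop ubiquitous terms
            for d, toks in tokenized.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

# Toy artifacts: two requirements and two code units of an irrigation system.
reqs = {"R1": "pump shall start when soil moisture falls below a threshold",
        "R2": "operator can override the irrigation schedule manually"}
code = {"C1": "PumpController starts the pump on a low moisture reading",
        "C2": "ScheduleOverride lets the operator override the schedule"}
vecs = tf_idf_vectors({**reqs, **code})

# Every source/target pair above the similarity cutoff becomes a candidate link.
links = {(r, c) for r in reqs for c in code if cosine(vecs[r], vecs[c]) > 0.1}

# Score the candidates against a (hypothetical) gold standard of true links.
gold = {("R1", "C1"), ("R2", "C2")}
precision = len(links & gold) / len(links) if links else 0.0
recall = len(links & gold) / len(gold)
print(sorted(links), precision, recall)
```
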
Irina Todoran, Zuheb Hussain, Niina Gromov, SOA Integration Modeling: An Evaluation of How SoaML Completes UML Modeling, In: 2011 15th IEEE International Enterprise Distributed Object Computing Conference Workshops, Helsinki, Finland, 2011-08-29. (Conference or Workshop Paper published in Proceedings)
With the current shift from traditional architectures towards Service-Oriented Architectures (SOAs), the need to model integration becomes increasingly apparent. This study analyzes two main approaches to SOA integration modeling: the Unified Modeling Language (UML) and the Service-oriented architecture Modeling Language (SoaML). Based on a literature study, the two approaches are evaluated against a defined set of criteria. The results show where SoaML adds value over plain UML and why it may be worth using on a large scale.

Norbert Seyff, Gregor Ollmann, Manfred Bortenschlager, iRequire: Gathering end-user requirements for new apps, In: Requirements Engineering Conference (RE) 2011, Trento, 2011-08-29. (Conference or Workshop Paper)
Mobile devices such as smartphones and Internet tablets have become an integral part of our lives, and users can install applications that provide a wide range of functionality. Our research focuses on an application that enables end-users to blog requirements in situ. The gathered end-user needs can serve as a starting point for the development of new applications and the evolution of mobile platforms.

Dustin Wüest, Bridging the gap between requirements sketches and semi-formal models, In: Doctoral Symposium of the 19th IEEE International Requirements Engineering Conference, 2011-08-29. (Conference or Workshop Paper)
State-of-the-art requirements modeling tools rely on predefined notations. In contrast, requirements engineers and stakeholders often sketch requirements in arbitrary notations during early elicitation phases. Engineers must then manually transform the sketches into semi-formal models, which is a time-consuming and error-prone task. We propose to investigate how early sketching and the transformation of sketches can be supported by a semi-automatic method that allows engineers to assign meaning to sketches on the fly. Our tool-supported approach is intended to bridge the gap between sketches and semi-formal models.

Irina Todoran, Semi-Automatic Service Integration of Telecom and Internet Services in a Service Delivery Platform, Aalto University Helsinki and Technische Universität München, School of Science and Technology, 2011. (Master's Thesis)
The purpose of this study was to identify the most appropriate way to (semi-)automatically integrate external Internet and telecom services into a Service Delivery Platform (SDP) for a telecommunications operator, thus making them available to the community of developers. Another aim was to show how the concept can be implemented in a service-oriented manner.

Both literature review and design science methods were applied in this thesis. The literature review was conducted, following a concept-centric approach, to identify and assess existing service description languages for Representational State Transfer (REST) architectures as well as alternatives for automatic code generation. The design science part focused on implementing a prototype of the automatic code generation service, which shows how the developed concept can be materialized. The constructed artifact consists of a use case based on the Google Language Application Programming Interface (API).

The literature review indicated that an Extensible Markup Language (XML)-based description meets the requirements for service specifications on an SDP. Furthermore, the study revealed that the most suitable solution for automatic source code generation is an engine that uses the description as the data model and a template as input, processes the data, and outputs a Java file. The template engine chosen was the Apache Velocity open source project, and the automatically generated source code was packaged as an Open Services Gateway initiative (OSGi) framework bundle that can be deployed on the SDP.

The principal conclusion was that semi-automatic code generation can be achieved on an SDP using a template-driven approach. This solution meets the project's generality requirements and works for services with an indefinite number of compulsory and optional parameters. The data model can therefore be customized for any RESTful service that exposes its interface, and service-oriented architecture design principles such as loose coupling, composability, and reusability are enabled.

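The thesis implements the generation step with Apache Velocity, a Java template engine. To keep the code examples in this list in one language, the sketch below illustrates the same template-driven idea with Python's standard library only: an XML service description acts as the data model, a template as the input, and a Java source file as the output. The XML schema, template, and service are invented for illustration and do not come from the thesis.

```python
import xml.etree.ElementTree as ET
from string import Template

# Invented XML service description (the "data model" of the generation engine).
SERVICE_XML = """
<service name="LanguageDetect" endpoint="https://example.invalid/detect">
  <param name="text" required="true"/>
  <param name="hint" required="false"/>
</service>
"""

# Invented template for the Java output (Velocity templates play this role).
JAVA_TEMPLATE = Template("""\
public class ${name}Client {
    private static final String ENDPOINT = "${endpoint}";
    public String call(${args}) {
        // build the REST request from the parameters ...
        return ENDPOINT + "?" + ${query};
    }
}
""")

root = ET.fromstring(SERVICE_XML)
params = [p.get("name") for p in root.findall("param")]
java_source = JAVA_TEMPLATE.substitute(
    name=root.get("name"),
    endpoint=root.get("endpoint"),
    args=", ".join(f"String {p}" for p in params),
    query=' + "&" + '.join(f'"{p}=" + {p}' for p in params),
)
# In the SDP setting, this output would be compiled and packaged as an OSGi bundle.
print(java_source)
```
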
Anne Koziolek, Ralf Reussner, Towards a generic quality optimisation framework for component-based system models, In: 14th international ACM Sigsoft symposium on Component based software engineering, Association for Computing Machinery, New York, NY, USA, 2011-06-21. (Conference or Workshop Paper published in Proceedings)
Designing component-based systems (CBS) that exhibit a good trade-off between multiple quality criteria is hard. Even after functional design, many remaining degrees of freedom of different types (e.g., component allocation, component selection, server configuration) in the CBS span a large, discontinuous design space. Automated approaches have been proposed to optimise CBS models, but they only consider a limited set of degrees of freedom, e.g., they only optimise the selection of components without considering the allocation, or vice versa. We propose a flexible and extensible formulation of the design space for optimising any CBS model for a number of quality properties and an arbitrary number of degrees of freedom. With this design space formulation, a generic quality optimisation framework that is independent of the used CBS metamodel can apply multi-objective metaheuristic optimisation such as evolutionary algorithms.

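To make the design-space formulation concrete, here is a minimal sketch of the idea, not the paper's framework: each degree of freedom contributes one choice to a candidate, candidates are evaluated for several quality properties, and the non-dominated candidates form the Pareto front. The degrees, value ranges, and toy quality functions are invented; the real framework derives the degrees from the CBS metamodel and searches with evolutionary algorithms instead of enumerating.

```python
from itertools import product

degrees = {                       # degree of freedom -> possible choices
    "allocation_of_C1": ["server1", "server2"],
    "selection_of_C2":  ["FastC2", "CheapC2"],
    "cpu_of_server1":   [2.0, 3.0],              # GHz
}

def qualities(candidate):
    """Toy objectives (response time in s, cost in $); lower is better for both."""
    rt = 1.0 / candidate["cpu_of_server1"] + (0.2 if candidate["selection_of_C2"] == "CheapC2" else 0.1)
    cost = (50 * candidate["cpu_of_server1"]
            + (30 if candidate["selection_of_C2"] == "FastC2" else 10)
            + (20 if candidate["allocation_of_C1"] == "server2" else 0))
    return rt, cost

def dominates(a, b):
    """Pareto dominance: a is at least as good everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# A candidate is one choice per degree of freedom (here enumerated exhaustively).
candidates = [dict(zip(degrees, vals)) for vals in product(*degrees.values())]
evaluated = [(c, qualities(c)) for c in candidates]
pareto = [c for c, q in evaluated
          if not any(dominates(q2, q) for _, q2 in evaluated)]
print(pareto)
```
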
Anne Koziolek, Heiko Koziolek, Ralf Reussner, PerOpteryx: automated application of tactics in multi-objective software architecture optimization, In: ACM SIGSOFT conference -- QoSA and ACM SIGSOFT symposium -- ISARCS on Quality of software architectures -- QoSA and architecting critical systems -- ISARCS, Association for Computing Machinery, New York, NY, USA, 2011-06-20. (Conference or Workshop Paper published in Proceedings)
Designing software architectures that exhibit a good trade-off between multiple quality attributes is hard. Even with a given functional design, many degrees of freedom in the software architecture (e.g., component deployment or server configuration) span a large design space. In current practice, software architects try to find good solutions manually, which is time-consuming, can be error-prone, and can lead to suboptimal designs. We propose an automated approach guided by architectural tactics to search the design space for good solutions. Our approach applies multi-objective evolutionary optimization to software architectures modelled with the Palladio Component Model. Software architects can then make well-informed trade-off decisions and choose the best architecture for their situation. To validate our approach, we applied it to the architecture models of two systems, a business reporting system and an industrial control system from ABB. The approach was able to find meaningful trade-offs leading to significant performance improvements or cost savings. The novel use of tactics decreased the time needed to find good solutions by up to 80%.

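A tactic in this setting is a directed model change applied when its precondition holds, rather than a purely random mutation. The sketch below shows one classic performance tactic, "spread the load", in that style; the model structure and utilization numbers are invented, and PerOpteryx itself operates on Palladio models rather than dictionaries.

```python
import random

# Invented architecture model: server utilizations plus a component allocation.
model = {
    "servers": {"s1": {"utilization": 0.95}, "s2": {"utilization": 0.30}},
    "allocation": {"BookingService": "s1", "ReportEngine": "s1", "Cache": "s2"},
}

def spread_the_load_tactic(model):
    """Move one random component off the hottest server onto the coolest one."""
    servers = model["servers"]
    hottest = max(servers, key=lambda s: servers[s]["utilization"])
    coolest = min(servers, key=lambda s: servers[s]["utilization"])
    movable = [c for c, s in model["allocation"].items() if s == hottest]
    if hottest != coolest and movable:
        component = random.choice(movable)
        model["allocation"][component] = coolest   # directed, not random, change
    return model

# Applied only when its precondition holds (a highly utilized server), such a
# step steers the evolutionary search toward promising candidates faster; the
# modified candidate would then be re-evaluated by the quality predictions.
print(spread_the_load_tactic(model)["allocation"])
```
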
Markus Nöbauer, Norbert Seyff, Neil Maiden, Konstantinos Zachos, S3C: Using service discovery to support requirements elicitation in the ERP domain, In: CAiSE 2011, Springer, London, 2011-06-20. (Conference or Workshop Paper published in Proceedings)
Requirements Elicitation and Fit-Gap Analysis are amongst the most time- and effort-consuming tasks in an ERP project. There is a potentially high rate of reuse in ERP projects, as solutions are mainly based on standard software components and services. However, the consultants' ability to identify relevant components for reuse is hampered by the increasing number of services available to them. The work described in this experience paper focuses on supporting consultants in identifying existing solutions that inform system design. We report on the development of a tool-supported approach called S3C, based on the Microsoft Sure Step methodology and the SeCSE open source service discovery tools. The S3C approach is tailored to the needs of SME companies in the ERP domain and overcomes limitations of Sure Step. We also present lessons learned from the initial application and evaluation of the S3C approach.

Markus Nöbauer, Norbert Seyff, Planning, Funding and Conducting Research to Address Challenges in ERP Projects, In: EPIC 2011, Brussels, 2011. (Conference or Workshop Paper published in Proceedings)
Heiko Koziolek, Bastian Schlich, Carlos Bilich, Roland Weiss, Steffen Becker, Klaus Krogmann, Mircea Trifu, Raffaela Mirandola, Anne Koziolek, An industrial case study on quality impact prediction for evolving service-oriented software, In: 33rd International Conference on Software Engineering, Association for Computing Machinery, New York, NY, USA, 2011-05-21. (Conference or Workshop Paper published in Proceedings)
Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the trade-offs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel model-driven prediction method called Q-ImPrESS to a large-scale process control system from the automation domain, consisting of several million lines of code, in order to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are its good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection.

Cédric Jeanneret, Martin Glinz, Benoit Baudry, Estimating footprints of model operations, In: 33rd International Conference on Software Engineering (ICSE 2011), 2011-05-21. (Conference or Workshop Paper published in Proceedings)
When performed on a model, a set of operations (e.g., queries or model transformations) rarely uses all the information present in the model. Unintended underuse of a model can indicate various problems: the model may contain more detail than necessary, or the operations may be immature or erroneous. Analyzing the footprints of the operations, i.e., the part of a model actually used by an operation, is a simple technique to diagnose and analyze such problems. However, precisely calculating the footprint of an operation is expensive, because it requires analyzing the operation's execution trace. In this paper, we present an automated technique to estimate the footprint of an operation without executing it. We evaluate our approach by applying it to 75 models and five operations. Our technique provides software engineers with an efficient, yet precise, evaluation of the usage of their models.

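To make the notion of a footprint concrete, the sketch below computes an exact footprint by tracing which attributes a toy model query actually reads at runtime; the paper's contribution is to estimate this set statically, without executing the operation. The model elements and the query are invented for illustration.

```python
class Traced:
    """Wraps a model element and logs every attribute that gets read."""
    def __init__(self, name, attrs, log):
        object.__setattr__(self, "_name", name)
        object.__setattr__(self, "_attrs", attrs)
        object.__setattr__(self, "_log", log)
    def __getattr__(self, attr):
        self._log.add((self._name, attr))   # record the access, then delegate
        return self._attrs[attr]

log = set()
classA = Traced("ClassA", {"name": "A", "abstract": False, "methods": 3}, log)
classB = Traced("ClassB", {"name": "B", "abstract": True, "methods": 7}, log)

def concrete_class_names(model):
    """A toy model query: names of all non-abstract classes."""
    return [c.name for c in model if not c.abstract]

concrete_class_names([classA, classB])
# Footprint: 'abstract' was read on both elements, 'name' only on ClassA;
# 'methods' never appears -- detail the model carries but this query ignores.
print(sorted(log))
```
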
Deepak Dhungana, Norbert Seyff, Florian Graf, Research preview: Supporting end-user requirements elicitation using product line variability models, In: REFSQ 2011: 17th International Working Conference on Requirements Engineering: Foundation for Software Quality, Springer, Essen, 2011-03-28. (Conference or Workshop Paper published in Proceedings)
[Context and motivation] Product line variability models have primarily been used for product configuration purposes. We argue that such models also contain information that is relevant for early software engineering activities. [Question/Problem] So far, the knowledge contained in variability models has not been used to improve requirements elicitation activities. Furthermore, state-of-the-art requirements elicitation approaches do not focus on the cost-effective identification of individual end-user needs, which, for example, is highly relevant for the customization of service-oriented systems. [Principal idea/results] The planned research will investigate how end-users can be empowered to document their individual needs themselves. We propose a tentative solution that facilitates end-user requirements elicitation by providing contextual information codified in software product line variability models. [Contribution] We present the idea of a “smart” tool that allows end-users to specify their needs and to customize, for example, a service-oriented system based on contextual information in variability models.

Nauman A Qureshi, Norbert Seyff, Anna Perini, Satisfying user needs at the right time and in the right place: A research preview, In: REFSQ 2011, Springer, Essen, 2011-03-28. (Conference or Workshop Paper published in Proceedings)
[Context and motivation] Most requirements engineering (RE) approaches involve analysts in gathering end-user needs. However, we promote the idea that future service-based applications should support end-users in expressing their needs themselves, while the system should be able to respond to these requests by combining existing services in a seamless way. [Question/problem] Research tackling this idea is limited. In this research preview paper we sketch a plan to investigate the following research questions: How can a system facilitate end-users in expressing new needs (e.g., goals, preferences)? How can the continuous analysis of end-user needs result in an appropriate solution? [Principal ideas/results] In our recent research, we have started to explore the idea of involving end-users in RE. Furthermore, we have proposed an architecture that allows performing RE at run-time. The purpose of the planned research is to combine and extend our recent work into a tool-based solution that involves end-users in realizing self-adaptive services. Our research objectives include continuously capturing, communicating, and analyzing end-user needs and feedback in order to provide a tailored solution. [Contribution] In this paper we preview the planned work. After reporting on our recent work, we present our research idea and research objectives in more detail.

Dustin Wüest, Martin Glinz, Flexible sketch-based requirements modeling, In: 17th International Working Conference on Requirements Engineering: Foundation for Software Quality, Springer-Verlag, Berlin, Heidelberg, 2011-03-28. (Conference or Workshop Paper published in Proceedings)
[Context and motivation] Requirements engineers and stakeholders like to create informal, sketchy models in order to communicate ideas and make them persistent. They prefer pen and paper over current software modeling tools because the former allow for any kind of sketch and do not break the creative flow. [Question/problem] To facilitate requirements management, engineers then need to manually transform the sketches into more formal models of the requirements. This is a tedious, time-consuming task. Furthermore, there is a risk that the original intentions of the sketched models and informal annotations get lost in the transition. [Principal ideas/results] We present the idea of a seamless, tool-supported transition from informal, sketchy drafts to more formal models such as UML diagrams. Our approach uses an existing sketch recognizer together with a dynamic library of modeling symbols. This library can be augmented and modified by the user at any time during the sketching/modeling process. Thus, an engineer can start sketching without any restrictions and add both syntax and semantics later, or define a domain-specific modeling language with any degree of formality and adapt it on the fly. [Contribution] In this paper we describe how our approach combines the advantages of modeling with the freedom and ease of sketching in a way other modeling tools cannot provide.

Daniel D Gouvêa, Cyro de A Assis D Muniz, Gilson A Pinto, Alberto Avritzer, Rosa M M Leão, Edmundo de Souza e Silva, Morganna C Diniz, Luca Berardinelli, Julius C B Leite, Daniel Mossé, Yuanfang Cai, Mike Dalton, Lucia Kapova, Anne Koziolek, Experience building non-functional requirement models of a complex industrial architecture, In: 2nd joint WOSP/SIPEW international conference on Performance engineering (ICPE'2011), Association for Computing Machinery, New York, NY, USA, 2011-03-14. (Conference or Workshop Paper published in Proceedings)
In this paper, we report on our experience with the application of validated models to assess the performance, reliability, and adaptability of a complex mission-critical system that is being developed to dynamically monitor and control the position of an oil-drilling platform. We present real-time modeling results showing that all tasks are schedulable. We performed stochastic analysis of the distribution of task execution times as a function of the number of system interfaces, and we report on the variability of task execution times for the expected system configurations. In addition, we executed a system library for an important task inside the performance model simulator and report on the measured algorithm convergence as a function of the number of vessel thrusters. We also studied the adaptability of the system architecture by comparing the documented architecture with the implemented source code, and we report on the adaptability findings and the recommendations we were able to provide to the system's architect. Finally, we developed models of hardware and software reliability. We report on hardware reliability results based on the evaluation of the system architecture and, as a topic for future work, on an approach that we recommend for evaluating the software reliability of the system under study.

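The abstract reports that all tasks were shown to be schedulable, but does not name the test used. As a plain illustration of what such a check can look like, the sketch below applies the classic Liu/Layland utilization bound for rate-monotonic scheduling to invented task parameters; this is a stand-in technique, not necessarily the paper's analysis.

```python
# Each task is (C, T): worst-case execution time and period, both in ms.
tasks = [(5, 40), (10, 100), (20, 200)]          # invented task set

utilization = sum(c / t for c, t in tasks)
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)                   # ~0.779 for n = 3

# If total utilization stays under the bound, rate-monotonic priorities are
# guaranteed to meet all deadlines (a sufficient, not necessary, condition).
print(f"U = {utilization:.3f}, bound = {bound:.3f}, "
      f"schedulable: {utilization <= bound}")
```
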
Catia Trubiani, Anne Koziolek, Detection and solution of software performance antipatterns in palladio architectural models, In: Proceeding of the second joint WOSP/SIPEW international conference on Performance engineering, Association for Computing Machinery, 2011-03-14. (Conference or Workshop Paper published in Proceedings)
Antipatterns are conceptually similar to patterns in that they document recurring solutions to common design problems. Performance antipatterns document, from a performance perspective, common mistakes made during software development as well as their solutions. The definition of performance antipatterns concerns software properties that can include static, dynamic, and deployment aspects. Currently, such knowledge is only used by domain experts; the problem of automatically detecting and solving antipatterns within an architectural model has not been experimented with yet. In this paper we present an approach to automatically detect and solve software performance antipatterns within Palladio architectural models: the detection of an antipattern provides software performance feedback to designers, since it suggests the architectural alternatives that actually allow overcoming specific performance problems. We implemented the approach, and a case study is presented to demonstrate its validity. The performance of the system under study was improved by 50% by applying the antipatterns' solutions.

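For flavor, here is a minimal rule-based sketch of the detect-and-solve loop, not the paper's implementation (which operates on Palladio models and their performance analysis results): a detection rule queries component properties for one documented antipattern, the "One-Lane Bridge", and the matching solution rewrites the model. The model and thresholds are invented.

```python
# Invented architectural model annotated with performance results.
model = {
    "components": {
        "OrderProcessor": {"utilization": 0.92, "pool_size": 1, "callers": 14},
        "AuditLog":       {"utilization": 0.35, "pool_size": 4, "callers": 2},
    }
}

def detect_one_lane_bridge(model, util_threshold=0.85):
    """'One-Lane Bridge': many requests serialize on a single-instance resource."""
    return [name for name, c in model["components"].items()
            if c["pool_size"] == 1 and c["utilization"] > util_threshold]

def solve(model, component):
    """Matching solution: widen the bridge by enlarging the instance pool."""
    model["components"][component]["pool_size"] += 3

# The detected antipattern plus its solution is the feedback to the designer:
# it names a concrete architectural alternative that addresses the problem.
for hit in detect_one_lane_bridge(model):
    solve(model, hit)
print(model["components"]["OrderProcessor"])
```
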
Proceedings of the 6th International Workshop on Models@run.time at the ACM/IEEE 14th International Conference on Model Driven Engineering Languages and Systems (MODELS 2011), Edited by: Nelly Bencomo, Gordon Blair, Betty Cheng, Robert France, Cédric Jeanneret, CEUR-WS.org, Wellington, New Zealand, 2011. (Proceedings)
Anne Koziolek, Qais Noorshams, Ralf Reussner, Focussing multi-objective software architecture optimization using quality of service bounds, In: Models in Software Engineering, Springer, Berlin/Heidelberg, pp. 384-399, 2011. (Book Chapter)
Quantitative prediction of non-functional properties, such as performance, reliability, and costs, of software architectures supports systematic software engineering. Even though there usually is a rough idea of the bounds for quality of service, the exact required values may be unclear and subject to trade-offs. Designing architectures that exhibit such a good trade-off between multiple quality attributes is hard. Even with a given functional design, many degrees of freedom in the software architecture (e.g., component deployment or server configuration) span a large design space. Automated approaches search the design space with multi-objective metaheuristics such as evolutionary algorithms. However, as quality prediction for a single architecture is computationally expensive, these approaches are time-consuming. In this work, we enhance an automated improvement approach to take bounds for quality of service into account in order to focus the search on interesting regions of the objective space, while still allowing trade-offs after the search. We compare two different constraint-handling techniques for considering the bounds. To validate our approach, we applied both techniques to an architecture model of a component-based business information system and compared them to an unbounded search in four scenarios. Every scenario was examined with 10 optimization runs, each investigating around 1600 architectural candidates. The results indicate that integrating quality of service bounds during the optimization process can improve the quality of the solutions found; however, the effect depends on the scenario, i.e., the problem and the quality requirements. The best results were achieved for cost requirements: the approach was able to decrease the time needed to find good solutions in the interesting regions of the objective space by 25% on average.

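One standard way to fold quality-of-service bounds into a multi-objective search is Deb's constrained-domination rule, sketched below: a candidate that violates the bounds is dominated by any candidate that violates them less, while feasible candidates are compared by ordinary Pareto dominance. This illustrates the general technique, not necessarily either of the two techniques compared in the chapter; the bounds and objective values are invented.

```python
BOUNDS = {"response_time": 5.0, "cost": 1000.0}   # upper bounds (QoS requirements)

def violation(q):
    """Total amount by which a candidate exceeds the quality bounds."""
    return sum(max(0.0, q[k] - bound) for k, bound in BOUNDS.items())

def constrained_dominates(a, b):
    va, vb = violation(a), violation(b)
    if va != vb:                                   # feasibility first:
        return va < vb                             # less violation wins outright
    return (all(a[k] <= b[k] for k in a) and       # otherwise, ordinary Pareto
            any(a[k] < b[k] for k in a))           # dominance (lower is better)

infeasible = {"response_time": 7.0, "cost": 600.0}  # violates the RT bound
feasible   = {"response_time": 4.0, "cost": 900.0}
print(constrained_dominates(feasible, infeasible))  # True: search is pushed
# toward the feasible region, while trade-offs inside it remain open.
```
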
Anne Koziolek, Heiko Koziolek, Lutz Prechelt, Ralf Reussner, From monolithic to component-based performance evaluation of software architectures. A series of experiments analysing accuracy and effort, Empirical Software Engineering, Vol. 16 (5), 2011. (Journal Article)
Background: Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically.

Objective: Do component-based methods allow making performance predictions with comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users.

Methods: We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires, and screen recordings. They were analysed using hypothesis testing, linear models, and analysis of variance.

Results: For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component.

Limitations: The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process.

Conclusions: Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.

Norbert Seyff, Florian Graf, Mobile Werkzeuge als Sprachrohr für Endbenutzer [Mobile tools as a mouthpiece for end-users], 2011. (Other Publication)