We have had our first lecture and have been set the task of researching Empirical Software Engineering. To keep track of our findings, we have been asked to keep an online blog, which will also help create a conversation about what we are finding out.
Empirical Software Engineering (ESE)
ESE is a topic I had not heard of before. From what I understand, ESE is research into helping businesses find or create software to suit their needs: a process of examining a piece of software or technology to decide whether it fits the requirements of the business at hand.
There is an International Symposium on Empirical Software Engineering and Measurement held across the world.
Their objective is to provide a forum where researchers, practitioners, and educators can report and discuss the most recent research results, innovations, trends, experiences, and concerns.
I believe they are going to collect the symposium papers from the event into one large journal. They link to a page of papers so far…
This paper looks into data fusion for feature location. Data fusion is the process by which information from multiple sources is combined to yield better results than if the sources were used individually. The aim of the paper is to help identify the source code that implements specific functionality in software.
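To get my head around the idea, here is a minimal sketch of what data fusion for feature location might look like: combining relevance scores from two hypothetical analyses (say, textual similarity and execution traces) so that methods ranked highly by both sources rise to the top. The method names, scores, and equal weighting are all made up for illustration, not taken from the paper.

```python
def fuse_scores(textual, dynamic, weight=0.5):
    """Weighted combination of two score dicts keyed by method name."""
    methods = set(textual) | set(dynamic)
    return {
        m: weight * textual.get(m, 0.0) + (1 - weight) * dynamic.get(m, 0.0)
        for m in methods
    }

# Hypothetical relevance scores for three methods against a "save file" feature.
textual = {"save": 0.9, "open": 0.4, "render": 0.1}
dynamic = {"save": 0.8, "open": 0.1, "render": 0.3}

fused = fuse_scores(textual, dynamic)
ranking = sorted(fused, key=fused.get, reverse=True)
print(ranking)  # 'save' comes first because both sources agree on it
```

The point of fusion is exactly this agreement effect: a method that only one source rates highly ends up below one that both sources support.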
In software engineering folklore, ‘clones’ are considered bad programming practice. They have also been identified as a ‘bad smell’ (Fowler et al. 1999) and a major cause of project maintenance difficulties. The paper analyses the relationship between cloning and defect proneness.
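To illustrate what a clone actually is, here is a toy sketch (my own example, not from the paper) of two functions that are identical apart from identifier names and spacing. Normalising the identifiers makes the duplication visible, which is roughly how token-based clone detectors spot renamed copies.

```python
import re

def normalize(source):
    """Collapse whitespace and replace identifiers with a placeholder,
    so copies that only differ by renaming compare equal."""
    source = " ".join(source.split())
    keywords = {"def", "return", "for", "in", "if", "else"}
    return re.sub(r"[A-Za-z_]\w*",
                  lambda m: m.group() if m.group() in keywords else "ID",
                  source)

# Two snippets duplicated by copy-paste, then renamed (a so-called Type-2 clone).
snippet_a = "def total(prices): return sum(p * 1.2 for p in prices)"
snippet_b = "def bill(items):    return sum(i * 1.2 for i in items)"

print(normalize(snippet_a) == normalize(snippet_b))  # True: the logic is a clone
```

The maintenance worry is then easy to see: a bug fixed in one copy silently survives in the other.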
This paper describes how empirical research has shown that Free/Libre/Open Source Software (FLOSS) developers tend to cluster around two roles: ‘core’ contributors and ‘peripheral’ developers.
A different mind-set is needed when finding security vulnerabilities compared to finding general faults in software. The paper looks at whether fault prediction models can be used for vulnerability prediction, or whether specialised vulnerability prediction models should be developed, when both kinds of model are built with the traditional metrics of complexity, code churn, and fault history.
The paper on ‘clones’ intrigues me, as I would like to look further into what these bad programming practices are. Ultimately, I would like to look into web apps that use physics engines, or the possibility of web pages turning 3D. I will try to see if I can find any information on that subject.