Abstract
In theory, a neural network can be trained to act as an artificial specification for a program by showing it samples of the program's executions. In practice, the training turns out to be very hard: programs often operate on discrete domains in which patterns are difficult to discern, and earlier experiments reported too many false positives. This paper revisits an experiment by Vanmali et al. by investigating several aspects left unexplored in the original work: the impact of using different learning modes, aggressiveness levels, and abstraction functions.
The results are quite promising.
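As a rough illustration of the idea (not the paper's actual setup), the sketch below trains a small scikit-learn MLP on input/output samples of a trusted reference program and then uses it as an approximate oracle for a faulty variant. The reference program, abstraction function, seeded fault, and aggressiveness threshold are all hypothetical stand-ins for the concepts named in the abstract.

```python
# A minimal sketch of a neural-network oracle, assuming a scikit-learn MLP.
# `reference_program`, `abstraction`, `faulty_program`, and AGGRESSIVENESS
# are illustrative stand-ins, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

def reference_program(x, y):
    # Trusted implementation: the sign of x * y.
    return int(np.sign(x * y))

def abstraction(output):
    # Abstraction function: map the discrete output domain {-1, 0, 1}
    # onto class labels the network can be trained against.
    return {-1: 0, 0: 1, 1: 2}[output]

# Sample executions of the reference program as training data.
rng = np.random.default_rng(0)
inputs = rng.integers(-100, 101, size=(2000, 2))
X = inputs / 100.0  # scale features into [-1, 1] to ease training
y = np.array([abstraction(reference_program(a, b)) for a, b in inputs])

oracle = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
oracle.fit(X, y)

def faulty_program(x, y):
    # Program under test with a seeded fault: wrong sign whenever x > 50.
    s = int(np.sign(x * y))
    return -s if x > 50 else s

# "Aggressiveness": how confident the oracle must be before a
# disagreement with the program under test is reported as a failure.
AGGRESSIVENESS = 0.9

for a, b in [(60, 7), (3, -4), (-20, 15)]:
    probs = oracle.predict_proba([[a / 100.0, b / 100.0]])[0]
    predicted = int(np.argmax(probs))
    actual = abstraction(faulty_program(a, b))
    if predicted != actual and probs[predicted] >= AGGRESSIVENESS:
        print(f"suspected failure on input ({a}, {b})")
```

Raising the threshold trades missed faults for fewer false positives, which is the trade-off the aggressiveness levels in the abstract refer to.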
Original language | English
---|---
Title of host publication | Testing Software and Systems
Subtitle of host publication | 30th IFIP WG 6.1 International Conference, ICTSS 2018, Cádiz, Spain, October 1-3, 2018, Proceedings
Editors | Inmaculada Medina-Bulo, Mercedes G. Merayo, Robert Hierons
Place of Publication | Cham
Publisher | Springer
Pages | 135-141
Number of pages | 7
Edition | 1
ISBN (Electronic) | 978-3-319-99927-2
ISBN (Print) | 978-3-319-99926-5
DOIs |
Publication status | Published - 7 Sept 2018
Publication series
Name | Lecture Notes in Computer Science
---|---
Publisher | Springer
Volume | 11146
ISSN (Print) | 0302-9743
ISSN (Electronic) | 1611-3349
Keywords
- Neural network for software testing
- Automated oracles