Research Publications

2018

Rens, G., Meyer, T., & Nayak, A. (2018). Maximizing Expected Impact in an Agent Reputation Network. In 41st German Conference on AI, Berlin, Germany, September 24–28, 2018. Springer. Retrieved from https://www.springer.com/us/book/9783030001100

We propose a new framework for reasoning about the reputation of multiple agents, based on the partially observable Markov decision process (POMDP). It is general enough for the specification of a variety of stochastic multi-agent system (MAS) domains involving the impact of agents on each other’s reputations. Assuming that an agent must maintain a good enough reputation to survive in the system, a method for an agent to select optimal actions is developed.

@inproceedings{198,
  author = {Gavin Rens and Tommie Meyer and A. Nayak},
  title = {Maximizing Expected Impact in an Agent Reputation Network},
  abstract = {We propose a new framework for reasoning about the reputation of multiple agents, based on the partially observable Markov decision process (POMDP). It is general enough for the specification of a variety of stochastic multi-agent system (MAS) domains involving the impact of agents on each other’s reputations. Assuming that an agent must maintain a good enough reputation to survive in the system, a method for an agent to select optimal actions is developed.},
  year = {2018},
  journal = {41st German Conference on AI, Berlin, Germany, September 24–28, 2018},
  pages = {99-106},
  month = {24/09-28/09},
  publisher = {Springer},
  isbn = {978-3-030-00110-0},
  url = {https://www.springer.com/us/book/9783030001100},
}
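At its core, the framework's action selection rests on the standard POMDP belief update: b'(s') ∝ O(o|s',a) * Σ_s T(s'|s,a) * b(s). A minimal Python sketch of that step follows; the two reputation states, the action, the observation, and all probabilities are invented toy values, not the paper's model.

# Toy POMDP belief update; all numbers are illustrative, not from the paper.
STATES = ["good_rep", "bad_rep"]

# T[a][s][s2]: probability of moving to state s2 when doing action a in state s.
T = {"cooperate": {"good_rep": {"good_rep": 0.9, "bad_rep": 0.1},
                   "bad_rep":  {"good_rep": 0.4, "bad_rep": 0.6}}}

# O[a][s2][o]: probability of observing o after action a lands in state s2.
O = {"cooperate": {"good_rep": {"praise": 0.8, "blame": 0.2},
                   "bad_rep":  {"praise": 0.3, "blame": 0.7}}}

def belief_update(belief, action, obs):
    """Posterior belief over states after performing action and seeing obs."""
    new_belief = {}
    for s2 in STATES:
        predicted = sum(T[action][s][s2] * belief[s] for s in STATES)
        new_belief[s2] = O[action][s2][obs] * predicted
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

print(belief_update({"good_rep": 0.5, "bad_rep": 0.5}, "cooperate", "praise"))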
Casini, G., Fermé, E., Meyer, T., & Varzinczak, I. (2018). A Semantic Perspective on Belief Change in a Preferential Non-Monotonic Framework. In 16th International Conference on Principles of Knowledge Representation and Reasoning. United States of America: AAAI Press. Retrieved from https://dblp.org/db/conf/kr/kr2018.html

Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we investigate the integration of the two formalisms by studying belief change for a (preferential) non-monotonic framework. We show that the standard AGM approach to belief change can be transferred to a preferential non-monotonic framework in the sense that change operations can be defined on conditional knowledge bases. We take as a point of departure the results presented by Casini and Meyer (2017), and we develop and extend such results with characterisations based on semantics and entrenchment relations, showing how some of the constructions defined for propositional logic can be lifted to our preferential non-monotonic framework.

@inproceedings{197,
  author = {Giovanni Casini and Eduardo Fermé and Tommie Meyer and Ivan Varzinczak},
  title = {A Semantic Perspective on Belief Change in a Preferential Non-Monotonic Framework},
  abstract = {Belief change and non-monotonic reasoning are usually viewed as two sides of the same coin, with results showing that one can formally be defined in terms of the other. In this paper we investigate the integration of the two formalisms by studying belief change for a (preferential) non-monotonic framework. We show that the standard AGM approach to belief change can be transferred to a preferential non-monotonic framework in the sense that change operations can be defined on conditional knowledge bases. We take as a point of departure the results presented by Casini and Meyer (2017), and we develop and extend such results with characterisations based on semantics and entrenchment relations, showing how some of the constructions defined for propositional logic can be lifted to our preferential non-monotonic framework.},
  year = {2018},
  journal = {16th International Conference on Principles of Knowledge Representation and Reasoning},
  pages = {220-229},
  month = {27/10-02/11},
  publisher = {AAAI Press},
  address = {United States of America},
  isbn = {978-1-57735-803-9},
  url = {https://dblp.org/db/conf/kr/kr2018.html},
}
van der Merwe, B., Berglund, M., & Bester, W. (2018). Formalising Boost POSIX Regular Expression Matching. In International Colloquium on Theoretical Aspects of Computing. Springer. Retrieved from https://link.springer.com/chapter/10.1007/978-3-030-02508-3_6

Whereas Perl-compatible regular expression matchers typically exhibit some variation of leftmost-greedy semantics, those conforming to the POSIX standard are prescribed leftmost-longest semantics. However, the POSIX standard leaves some room for interpretation, and Fowler and Kuklewicz have done experimental work to confirm differences between various POSIX matchers. The Boost library has an interesting take on the POSIX standard, where it maximises the leftmost match not with respect to subexpressions of the regular expression pattern, but rather, with respect to capturing groups. In our work, we provide the first formalisation of Boost semantics, and we analyse the complexity of regular expression matching when using Boost semantics.

@inproceedings{196,
  author = {Brink van der Merwe and Martin Berglund and Willem Bester},
  title = {Formalising Boost POSIX Regular Expression Matching},
  abstract = {Whereas Perl-compatible regular expression matchers typically exhibit some variation of leftmost-greedy semantics, those conforming to the POSIX standard are prescribed leftmost-longest semantics. However, the POSIX standard leaves some room for interpretation, and Fowler and Kuklewicz have done experimental work to confirm differences between various POSIX matchers. The Boost library has an interesting take on the POSIX standard, where it maximises the leftmost match not with respect to subexpressions of the regular expression pattern, but rather, with respect to capturing groups. In our work, we provide the first formalisation of Boost semantics, and we analyse the complexity of regular expression matching when using Boost semantics.},
  year = {2018},
  journal = {International Colloquium on Theoretical Aspects of Computing},
  pages = {99-115},
  month = {17/02},
  publisher = {Springer},
  isbn = {978-3-030-02508-3},
  url = {https://link.springer.com/chapter/10.1007/978-3-030-02508-3_6},
}
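The greedy-versus-longest distinction at issue above is easy to see concretely. A small sketch, using Python's re module (a Perl-style, leftmost-greedy matcher) against a brute-forced longest match; this illustrates the general POSIX rule only and does not model Boost's capturing-group variant.

import re

pattern, text = "a|ab", "ab"

# Perl-style (leftmost-greedy): the first alternative that succeeds wins.
greedy = re.match(pattern, text).group(0)        # -> "a"

def longest_match_at_start(pattern, text):
    """Brute-force the longest match anchored at position 0; a real POSIX
    matcher considers every leftmost starting position, but 0 suffices here."""
    for end in range(len(text), -1, -1):
        if re.fullmatch(pattern, text[:end]):
            return text[:end]
    return None

print("greedy: ", greedy)                                 # a
print("longest:", longest_match_at_start(pattern, text))  # ab (POSIX answer)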
Ndaba, M., Pillay, A., & Ezugwu, A. (2018). An Improved Generalized Regression Neural Network for Type II Diabetes Classification. In ICCSA 2018, LNCS Vol. 10963. Springer International Publishing AG.

This paper proposes an improved Generalized Regression Neural Network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for diabetes diagnosis. The technique outperforms the best known GRNN techniques for Type II diabetes diagnosis in terms of classification accuracy and computational time, obtaining a classification accuracy of 86% with 83% sensitivity and 87% specificity. An area under the receiver operating characteristic (ROC) curve of 87% was also obtained.

@inbook{195,
  author = {Moeketsi Ndaba and Anban Pillay and Absalom Ezugwu},
  title = {An Improved Generalized Regression Neural Network for Type II Diabetes Classification},
  abstract = {This paper proposes an improved Generalized Regression Neural Network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for diabetes diagnosis. The technique outperforms the best known GRNN techniques for Type II diabetes diagnosis in terms of classification accuracy and computational time, obtaining a classification accuracy of 86% with 83% sensitivity and 87% specificity. An area under the receiver operating characteristic (ROC) curve of 87% was also obtained.},
  year = {2018},
  journal = {ICCSA 2018, LNCS 10963},
  volume = {10963},
  pages = {659-671},
  publisher = {Springer International Publishing AG},
  isbn = {3319951718},
}
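The two-stage construction the abstract describes can be sketched compactly: cluster the training data, then let the GRNN output a kernel-weighted average of the cluster targets. Plain scikit-learn K-Means stands in for the paper's CVE-K-Means, and the data and bandwidth below are synthetic placeholders, not the Pima Indian dataset.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for a diabetes-style dataset: 8 features, binary label.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Stage 1: cluster; the centroids become the GRNN's pattern units, each
# predicting the mean label of the training points assigned to it.
k = 20
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
centroids = km.cluster_centers_
targets = np.array([y[km.labels_ == j].mean() for j in range(k)])

# Stage 2: GRNN output = Gaussian-kernel-weighted average of centroid targets.
def grnn_predict(x, sigma=1.0):
    sq_dist = ((centroids - x) ** 2).sum(axis=1)
    w = np.exp(-sq_dist / (2 * sigma ** 2))
    return float(w @ targets / w.sum())

print(grnn_predict(X[0]), y[0])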
Jembere, E., Rawatlal, R., & Pillay, A. (2018). Matrix Factorisation for Predicting Student Performance. In 2017 7th World Engineering Education Forum (WEEF). IEEE.

Predicting student performance in tertiary institutions has the potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring, and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student’s mark for a module, given the student’s performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at the University of KwaZulu-Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggest that Matrix Factorization performs better than both benchmarks.

@inproceedings{194,
  author = {Edgar Jembere and Randhir Rawatlal and Anban Pillay},
  title = {Matrix Factorisation for Predicting Student Performance},
  abstract = {Predicting student performance in tertiary institutions has the potential to improve curriculum advice given to students, the planning of interventions for academic support and monitoring, and curriculum design. The student performance prediction problem, as defined in this study, is the prediction of a student’s mark for a module, given the student’s performance in previously attempted modules. The prediction problem is amenable to machine learning techniques, provided that sufficient data is available for analysis. This work reports on a study undertaken at the College of Agriculture, Engineering and Science at the University of KwaZulu-Natal that investigates the efficacy of Matrix Factorization as a technique for solving the prediction problem. The study uses Singular Value Decomposition (SVD), a Matrix Factorization technique that has been successfully used in recommender systems. The performance of the technique was benchmarked against the use of student and course average marks as predictors of performance. The results obtained suggest that Matrix Factorization performs better than both benchmarks.},
  year = {2018},
  journal = {2017 7th World Engineering Education Forum (WEEF)},
  pages = {513-518},
  month = {13/11-16/11},
  publisher = {IEEE},
  isbn = {978-1-5386-1523-2},
}
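A minimal sketch of the SVD idea applied to this problem: treat students-by-modules as a partially observed matrix, fill the gaps with a baseline, and read predictions off a low-rank reconstruction. The marks are invented, and production systems typically iterate the fill-and-factor step or use gradient-based factorisation instead of this one-shot version.

import numpy as np

# Toy student-by-module mark matrix; np.nan marks modules not yet attempted.
M = np.array([[65., 70., np.nan, 80.],
              [55., np.nan, 60., 62.],
              [75., 78., 82., np.nan],
              [50., 52., np.nan, 55.]])

observed = ~np.isnan(M)
filled = np.where(observed, M, M[observed].mean())   # seed gaps with the mean

# Rank-r truncated SVD; the reconstruction predicts the unobserved cells.
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
r = 2
approx = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

print("predicted mark for student 0, module 2:", round(approx[0, 2], 1))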
Ndaba, M., Pillay, A., & Ezugwu, A. (2018). A Comparative Study of Machine Learning Techniques for Classifying Type II Diabetes Mellitus (MSc thesis).

Diabetes is a metabolic disorder that develops when the body does not make enough insulin or is not able to use insulin effectively. Accurate and early detection of diabetes can aid in effective management of the disease. Several machine learning techniques have shown promise as cost-effective ways for early diagnosis of the disease to reduce the occurrence of health complications arising due to delayed diagnosis. This study compares the efficacy of three broad machine learning approaches, viz. Artificial Neural Networks (ANNs), Instance-based classification, and Statistical Regression, to diagnose type II diabetes. For each approach, this study proposes novel techniques that extend the state of the art. The new techniques include Artificial Neural Networks hybridized with an improved K-Means clustering and a boosting technique; improved variants of Logistic Regression (LR), the K-Nearest Neighbours algorithm (KNN), and K-Means clustering. The techniques were evaluated on the Pima Indian diabetes dataset and the results were compared to recent results reported in the literature. The highest classification accuracy of 100% with 100% sensitivity and 100% specificity was achieved using an ensemble of the boosting technique, the enhanced K-Means clustering algorithm (CVE-K-Means) and the Generalized Regression Neural Network (GRNN): B-KGRNN. A hybrid of the CVE-K-Means algorithm and GRNN (KGRNN) achieved the best accuracy of 86% with 83% sensitivity. The improved LR model (LR-n) achieved the highest classification accuracy of 84% with 72% sensitivity. The new multi-layer perceptron (MLP-BPX) achieved the best accuracy of 82% and 72% sensitivity. A hybrid of KNN and CVE-K-Means (CKNN) achieved the best accuracy of 81% and 89% sensitivity. The CVE-K-Means technique achieved the best accuracy of 80% and 61% sensitivity. The B-KGRNN, KGRNN, LR-n, and CVE-K-Means techniques outperformed similar techniques in the literature in terms of classification accuracy by 15%, 1%, 2%, and 3% respectively. The CKNN and KGRNN techniques proved to have lower computational complexity than the standard KNN and GRNN algorithms. Employing data pre-processing techniques such as feature extraction and missing value removal improved the classification accuracy of machine learning techniques by more than 11% in most instances.

@phdthesis{192,
  author = {Moeketsi Ndaba and Anban Pillay and Absalom Ezugwu},
  title = {A Comparative Study of Machine Learning Techniques for Classifying Type II Diabetes Mellitus},
  abstract = {Diabetes is a metabolic disorder that develops when the body does not make enough insulin or is not able to use insulin effectively. Accurate and early detection of diabetes can aid in effective management of the disease. Several machine learning techniques have shown promise as cost-effective ways for early diagnosis of the disease to reduce the occurrence of health complications arising due to delayed diagnosis. This study compares the efficacy of three broad machine learning approaches, viz. Artificial Neural Networks (ANNs), Instance-based classification, and Statistical Regression, to diagnose type II diabetes. For each approach, this study proposes novel techniques that extend the state of the art. The new techniques include Artificial Neural Networks hybridized with an improved K-Means clustering and a boosting technique; improved variants of Logistic Regression (LR), the K-Nearest Neighbours algorithm (KNN), and K-Means clustering. The techniques were evaluated on the Pima Indian diabetes dataset and the results were compared to recent results reported in the literature. The highest classification accuracy of 100% with 100% sensitivity and 100% specificity was achieved using an ensemble of the boosting technique, the enhanced K-Means clustering algorithm (CVE-K-Means) and the Generalized Regression Neural Network (GRNN): B-KGRNN. A hybrid of the CVE-K-Means algorithm and GRNN (KGRNN) achieved the best accuracy of 86% with 83% sensitivity. The improved LR model (LR-n) achieved the highest classification accuracy of 84% with 72% sensitivity. The new multi-layer perceptron (MLP-BPX) achieved the best accuracy of 82% and 72% sensitivity. A hybrid of KNN and CVE-K-Means (CKNN) achieved the best accuracy of 81% and 89% sensitivity. The CVE-K-Means technique achieved the best accuracy of 80% and 61% sensitivity. The B-KGRNN, KGRNN, LR-n, and CVE-K-Means techniques outperformed similar techniques in the literature in terms of classification accuracy by 15%, 1%, 2%, and 3% respectively. The CKNN and KGRNN techniques proved to have lower computational complexity than the standard KNN and GRNN algorithms. Employing data pre-processing techniques such as feature extraction and missing value removal improved the classification accuracy of machine learning techniques by more than 11% in most instances.},
  year = {2018},
  volume = {MSc},
}
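For reference, the accuracy, sensitivity, and specificity figures quoted throughout derive from a binary confusion matrix in the standard way; the counts below are made up purely to show the arithmetic.

# tp/fn/fp/tn: true/false positives/negatives from some classifier's test run.
tp, fn, fp, tn = 42, 8, 10, 68

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate: diabetic cases caught
specificity = tn / (tn + fp)   # true-negative rate: healthy cases cleared

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")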
Waltham, M., Moodley, D., & Pillay, A. (2018). Q-Cog: A Q-Learning Based Cognitive Agent Architecture for Complex 3D Virtual Worlds (MSc thesis). Durban University.

Intelligent cognitive agents requiring a high level of adaptability should contain minimal initial data and be able to autonomously gather new knowledge from their own experiences. 3D virtual worlds provide complex environments in which autonomous software agents may learn and interact. In many applications within this domain, such as video games and virtual reality, the environment is partially observable and agents must make decisions and react in real-time. Due to the dynamic nature of virtual worlds, adaptability is of great importance for virtual agents. The Reinforcement Learning paradigm provides a mechanism for unsupervised learning that allows agents to learn from their own experiences in the environment. In particular, the Q-Learning algorithm allows agents to develop an optimal action-selection policy based on their environment experiences. This research explores the potential of cognitive architectures utilizing Reinforcement Learning whereby agents may contain a library of action-selection policies within virtual environments. The proposed cognitive architecture, Q-Cog, utilizes a policy selection mechanism to develop adaptable 3D virtual agents. Results from experimentation indicate that Q-Cog provides an effective basis for developing adaptive self-learning agents for 3D virtual worlds.

@phdthesis{190,
  author = {Michael Waltham and Deshen Moodley and Anban Pillay},
  title = {Q-Cog: A Q-Learning Based Cognitive Agent Architecture for Complex 3D Virtual Worlds},
  abstract = {Intelligent cognitive agents requiring a high level of adaptability should contain minimal initial data and be able to autonomously gather new knowledge from their own experiences. 3D virtual worlds provide complex environments in which autonomous software agents may learn and interact. In many applications within this domain, such as video games and virtual reality, the environment is partially observable and agents must make decisions and react in real-time. Due to the dynamic nature of virtual worlds, adaptability is of great importance for virtual agents. The Reinforcement Learning paradigm provides a mechanism for unsupervised learning that allows agents to learn from their own experiences in the environment. In particular, the Q-Learning algorithm allows agents to develop an optimal action-selection policy based on their environment experiences. This research explores the potential of cognitive architectures utilizing Reinforcement Learning whereby agents may contain a library of action-selection policies within virtual environments. The proposed cognitive architecture, Q-Cog, utilizes a policy selection mechanism to develop adaptable 3D virtual agents. Results from experimentation indicate that Q-Cog provides an effective basis for developing adaptive self-learning agents for 3D virtual worlds.},
  year = {2018},
  volume = {MSc},
  publisher = {Durban University},
}
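The Q-Learning rule Q-Cog builds on is small enough to show in full: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). The corridor environment below is invented for illustration; only the update rule itself is the algorithm named in the abstract.

import random

ALPHA, GAMMA = 0.1, 0.9
ACTIONS = (-1, +1)                      # move left / move right
GOAL = 4                                # rightmost cell of a 5-cell corridor
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

random.seed(0)
for _ in range(300):                    # episodes under a random behaviour
    s = 0                               # policy (Q-learning is off-policy)
    while s != GOAL:
        a = random.choice(ACTIONS)
        s2 = min(GOAL, max(0, s + a))
        r = 1.0 if s2 == GOAL else 0.0
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy policy read off the learned Q-table: always move right (+1).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})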
Dzitiro, J., Jembere, E., & Pillay, A. (2018). A DeepQA Based Real-Time Document Recommender System. In Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2018. South Africa: SATNAC.

Recommending relevant documents to users in real-time as they compose their own documents differs from the traditional task of recommending products to users. Variation in the users’ interests as they work on their documents can undermine the effectiveness of classical recommender system techniques that depend heavily on off-line data. This necessitates the use of real-time data gathered as the user is composing a document to determine which documents the user will most likely be interested in. Classical methodologies for evaluating recommender systems are not appropriate for this problem. This paper proposed a methodology for evaluating real-time document recommender system solutions. The proposed methodology was then used to show that a solution that anticipates a user’s interest and makes only high confidence recommendations performs better than a classical content-based filtering solution. The results obtained using the proposed methodology confirmed that there is a need for a new breed of recommender systems algorithms for real-time document recommender systems that can anticipate the user’s interest and make only high confidence recommendations.

@inproceedings{189,
  author = {Joshua Dzitiro and Edgar Jembere and Anban Pillay},
  title = {A DeepQA Based Real-Time Document Recommender System},
  abstract = {Recommending relevant documents to users in real-time as they compose their own documents differs from the traditional task of recommending products to users. Variation in the users’ interests as they work on their documents can undermine the effectiveness of classical recommender system techniques that depend heavily on off-line data. This necessitates the use of real-time data gathered as the user is composing a document to determine which documents the user will most likely be interested in. Classical methodologies for evaluating recommender systems are not appropriate for this problem. This paper proposed a methodology for evaluating real-time document recommender system solutions. The proposed methodology was then used to show that a solution that anticipates a user’s interest and makes only high confidence recommendations performs better than a classical content-based filtering solution. The results obtained using the proposed methodology confirmed that there is a need for a new breed of recommender systems algorithms for real-time document recommender systems that can anticipate the user’s interest and make only high confidence recommendations.},
  year = {2018},
  journal = {Southern Africa Telecommunication Networks and Applications Conference (SATNAC) 2018},
  pages = {304-309},
  month = {02/09-05/09},
  publisher = {SATNAC},
  address = {South Africa},
}
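A toy rendering of the "high confidence recommendations only" idea: score library documents against the draft by cosine similarity over term counts and suppress anything below a threshold. The documents, tokeniser, and threshold value are illustrative assumptions, not the DeepQA-based system itself.

from collections import Counter
from math import sqrt

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

library = {
    "doc1": "recommender systems for documents",
    "doc2": "sugarcane harvest logistics",
    "doc3": "evaluating recommender systems in real time",
}
draft = "real time document recommender evaluation"

draft_vec = Counter(draft.split())
THRESHOLD = 0.4                        # recommend only high-confidence matches
for name, text in library.items():
    score = cosine(draft_vec, Counter(text.split()))
    if score >= THRESHOLD:
        print(name, round(score, 2))  # only doc3 clears the bar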
Harmse, H., Britz, K., & Gerber, A. (2018). Informative Armstrong RDF datasets for n-ary relations. In Formal Ontology in Information Systems: 10th International Conference, Cape Town, South Africa. IOS Press.

The W3C standardized Semantic Web languages enable users to capture data without a schema in a manner which is intuitive to them. The challenge is that for the data to be useful, it should be possible to query the data and to query it efficiently, which necessitates a schema. Understanding the structure of data is thus important to both users and storage implementers: the structure of the data gives insight to users in how to query the data while storage implementers can use the structure to optimize queries. In this paper we propose that data mining routines can be used to infer candidate n-ary relations with related uniqueness- and null-free constraints, which can be used to construct an informative Armstrong RDF dataset. The benefit of an informative Armstrong RDF dataset is that it provides example data based on the original data which is a fraction of the size of the original data, while capturing the constraints of the original data faithfully. A case study on a DBpedia person dataset showed that the associated informative Armstrong RDF dataset contained 0.00003% of the statements of the original DBpedia dataset.

@inproceedings{188,
  author = {Henriette Harmse and Katarina Britz and Aurona Gerber},
  title = {Informative Armstrong RDF datasets for n-ary relations},
  abstract = {The W3C standardized Semantic Web languages enable users to capture data without a schema in a manner which is intuitive to them. The challenge is that for the data to be useful, it should be possible to query the data and to query it efficiently, which necessitates a schema. Understanding the structure of data is thus important to both users and storage implementers: the structure of the data gives insight to users in how to query the data while storage implementers can use the structure to optimize queries. In this paper we propose that data mining routines can be used to infer candidate n-ary relations with related uniqueness- and null-free constraints, which can be used to construct an informative Armstrong RDF dataset. The benefit of an informative Armstrong RDF dataset is that it provides example data based on the original data which is a fraction of the size of the original data, while capturing the constraints of the original data faithfully. A case study on a DBpedia person dataset showed that the associated informative Armstrong RDF dataset contained 0.00003% of the statements of the original DBpedia dataset.},
  year = {2018},
  journal = {Formal Ontology in Information Systems: 10th International Conference, Cape Town, South Africa},
  pages = {187-198},
  month = {17/09-21/09},
  publisher = {IOS Press},
}
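The mining step reduced to its core: test which attribute sets of a flattened n-ary relation satisfy a uniqueness constraint, keeping the minimal ones as candidate keys. The person relation below is an invented three-row example, not the DBpedia dataset.

from itertools import combinations

ATTRS = ("name", "birth_year", "city")
ROWS = [("Alice", 1980, "Cape Town"),
        ("Bob",   1980, "Durban"),
        ("Alice", 1975, "Durban")]

def is_unique(idx):
    projection = [tuple(row[i] for i in idx) for row in ROWS]
    return len(set(projection)) == len(projection)   # no two rows agree

minimal_keys = []
for size in range(1, len(ATTRS) + 1):
    for combo in combinations(range(len(ATTRS)), size):
        if any(set(key) <= set(combo) for key in minimal_keys):
            continue                                 # skip supersets of found keys
        if is_unique(combo):
            minimal_keys.append(combo)

print([tuple(ATTRS[i] for i in key) for key in minimal_keys])
# -> every pair of attributes is a minimal key in this toy relation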
Britz, K., & Varzinczak, I. (2018). Context and rationality in defeasible subsumption. In Foundations of Information and Knowledge Systems: 10th International Symposium FoIKS 2018, Budapest, Hungary. Springer.

Description logics have been extended in a number of ways to support defeasible reasoning in the KLM tradition. Such features include preferential or rational defeasible concept subsumption, and defeasible roles in complex concept descriptions. Semantically, defeasible subsumption is obtained by means of a preference order on objects, while defeasible roles are obtained by adding a preference order to role interpretations. In this paper, we address an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept subsumption to a single preference order on objects. We do this by inducing a modular preference order on objects from each preference order on roles, and use these to relativise defeasible subsumption. This yields a notion of contextualised rational defeasible subsumption, with contexts described by roles. We also provide a semantic construction for and a method for the computation of contextual rational closure, and present a correspondence result between the two.

@inproceedings{187,
  author = {Katarina Britz and Ivan Varzinczak},
  title = {Context and rationality in defeasible subsumption},
  abstract = {Description logics have been extended in a number of ways to support defeasible reasoning in the KLM tradition. Such features include preferential or rational defeasible concept subsumption, and defeasible roles in complex concept descriptions. Semantically, defeasible subsumption is obtained by means of a preference order on objects, while defeasible roles are obtained by adding a preference order to role interpretations. In this paper, we address an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept subsumption to a single preference order on objects. We do this by inducing a modular preference order on objects from each preference order on roles, and use these to relativise defeasible subsumption. This yields a notion of contextualised rational defeasible subsumption, with contexts described by roles. We also provide a semantic construction for and a method for the computation of contextual rational closure, and present a correspondence result between the two.},
  year = {2018},
  journal = {Foundations of Information and Knowledge Systems: 10th International Symposium FoIKS 2018, Budapest, Hungary},
  pages = {114-132},
  month = {14/05-18/05},
  publisher = {Springer},
}
Harmse, H., Britz, K., & Gerber, A. (2018). Generating Armstrong ABoxes for ALC TBoxes. In Theoretical Aspects of Computing: 15th International Colloquium, Stellenbosch, South Africa. Springer.

A challenge in ontology engineering is the mismatch in expertise between the ontology engineer and domain expert, which often leads to important constraints not being specified. Domain experts often only focus on specifying constraints that should hold and not on specifying constraints that could possibly be violated. In an attempt to bridge this gap we propose the use of “perfect test data”. The generated test data is perfect in that it satisfies all the constraints of an application domain that are required, including ensuring that the test data violates constraints that can be violated. In the context of Description Logic ontologies we call this test data an “Armstrong ABox”, a notion derived from Armstrong relations in relational database theory. In this paper we detail the theoretical development of Armstrong ABoxes for ALC TBoxes as well as an algorithm for generating such Armstrong ABoxes. The proposed algorithm is based, via the ontology completion algorithm of Baader et al., on attribute exploration in formal concept analysis.

@inproceedings{186,
  author = {Henriette Harmse and Katarina Britz and Aurona Gerber},
  title = {Generating Armstrong ABoxes for ALC TBoxes},
  abstract = {A challenge in ontology engineering is the mismatch in expertise between the ontology engineer and domain expert, which often leads to important constraints not being specified. Domain experts often only focus on specifying constraints that should hold and not on specifying constraints that could possibly be violated. In an attempt to bridge this gap we propose the use of “perfect test data”. The generated test data is perfect in that it satisfies all the constraints of an application domain that are required, including ensuring that the test data violates constraints that can be violated. In the context of Description Logic ontologies we call this test data an “Armstrong ABox”, a notion derived from Armstrong relations in relational database theory. In this paper we detail the theoretical development of Armstrong ABoxes for ALC TBoxes as well as an algorithm for generating such Armstrong ABoxes. The proposed algorithm is based, via the ontology completion algorithm of Baader et al., on attribute exploration in formal concept analysis.},
  year = {2018},
  journal = {Theoretical Aspects of Computing: 15th International Colloquium, Stellenbosch, South Africa},
  pages = {211-230},
  month = {16/10-19/10},
  publisher = {Springer},
}
Berndt, J., Fischer, B., & Britz, K. (2018). Scaling the ConceptCloud browser to large semi-structured data sets. In 14th African Conference on Research in Computer Science and Applied Mathematics, Stellenbosch, South Africa, Proceedings. HAL archives-ouvertes. Retrieved from https://hal.inria.fr/hal-01881376

Semi-structured data sets such as product reviews or event log data are simultaneously becoming more widely used and growing ever larger. This paper describes ConceptCloud, a flexible interactive browser for semi-structured datasets, with a focus on the recent trend of implementing server-based architectures to accommodate ever growing datasets. ConceptCloud makes use of an intuitive tag cloud visualization viewer in combination with an underlying concept lattice to provide a formal structure for navigation through datasets without prior knowledge of the structure of the data or compromising scalability. This is achieved by implementing architectural changes to increase the system’s resource efficiency.

@inproceedings{185,
  author = {Joshua Berndt and Bernd Fischer and Katarina Britz},
  title = {Scaling the ConceptCloud browser to large semi-structured data sets},
  abstract = {Semi-structured data sets such as product reviews or event log data are simultaneously becoming more widely used and growing ever larger. This paper describes ConceptCloud, a flexible interactive browser for semi-structured datasets, with a focus on the recent trend of implementing server-based architectures to accommodate ever growing datasets. ConceptCloud makes use of an intuitive tag cloud visualization viewer in combination with an underlying concept lattice to provide a formal structure for navigation through datasets without prior knowledge of the structure of the data or compromising scalability. This is achieved by implementing architectural changes to increase the system’s resource efficiency.},
  year = {2018},
  journal = {14th African Conference on Research in Computer Science and Applied Mathematics, Stellenbosch, South Africa, Proceedings},
  pages = {276-283},
  month = {14/10-16/10},
  publisher = {HAL archives-ouvertes},
  url = {https://hal.inria.fr/hal-01881376},
}
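The concept lattice that gives ConceptCloud its navigation structure comes from formal concept analysis. The brute-force enumeration below shows what a formal concept is on a three-review toy dataset; real browsers use incremental algorithms, since this naive version is exponential in the number of objects.

from itertools import combinations

CONTEXT = {                      # object -> set of tags (invented data)
    "review1": {"positive", "camera"},
    "review2": {"positive", "battery"},
    "review3": {"negative", "battery"},
}
OBJECTS = list(CONTEXT)
ATTRIBUTES = set().union(*CONTEXT.values())

def intent(objs):                # tags shared by all objects in objs
    return set.intersection(*(CONTEXT[o] for o in objs)) if objs else set(ATTRIBUTES)

def extent(attrs):               # objects carrying every tag in attrs
    return {o for o in OBJECTS if attrs <= CONTEXT[o]}

concepts = set()
for r in range(len(OBJECTS) + 1):
    for objs in combinations(OBJECTS, r):
        shared = intent(set(objs))
        concepts.add((frozenset(extent(shared)), frozenset(shared)))  # closure

for ext, int_ in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), "<->", sorted(int_))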
Britz, K., & Varzinczak, I. (2018). Preferential accessibility and preferred worlds. Journal of Logic, Language and Information, 27(2). Retrieved from https://doi.org/10.1007/s10849-017-9264-0

Modal accounts of normality in non-monotonic reasoning traditionally have an underlying semantics based on a notion of preference amongst worlds. In this paper, we motivate and investigate an alternative semantics, based on ordered accessibility relations in Kripke frames. The underlying intuition is that some world tuples may be seen as more normal, while others may be seen as more exceptional. We show that this delivers an elegant and intuitive semantic construction, which gives a new perspective on defeasible necessity. Technically, the revisited logic does not change the expressive power of our previously defined preferential modalities. This conclusion follows from an analysis of both semantic constructions via a generalisation of bisimulations to the preferential case. Reasoners based on the previous semantics therefore also suffice for reasoning over the new semantics. We complete the picture by investigating different notions of defeasible conditionals in modal logic that can also be captured within our framework. A preliminary version of the work reported in this paper was presented at the Workshop on Nonmonotonic Reasoning.

@article{183,
  author = {Katarina Britz and Ivan Varzinczak},
  title = {Preferential accessibility and preferred worlds},
  abstract = {Modal accounts of normality in non-monotonic reasoning traditionally have an underlying semantics based on a notion of preference amongst worlds. In this paper, we motivate and investigate an alternative semantics, based on ordered accessibility relations in Kripke frames. The underlying intuition is that some world tuples may be seen as more normal, while others may be seen as more exceptional. We show that this delivers an elegant and intuitive semantic construction, which gives a new perspective on defeasible necessity. Technically, the revisited logic does not change the expressive power of our previously defined preferential modalities. This conclusion follows from an analysis of both semantic constructions via a generalisation of bisimulations to the preferential case. Reasoners based on the previous semantics therefore also suffice for reasoning over the new semantics. We complete the picture by investigating different notions of defeasible conditionals in modal logic that can also be captured within our framework. A preliminary version of the work reported in this paper was presented at the Workshop on Nonmonotonic Reasoning.},
  year = {2018},
  journal = {Journal of Logic, Language and Information},
  volume = {27},
  pages = {133-155},
  issue = {2},
  publisher = {Springer},
  url = {https://doi.org/10.1007/s10849-017-9264-0},
}
Britz, K., & Varzinczak, I. (2018). From KLM-Style Conditionals to Defeasible Modalities, and Back. Journal of Applied Non-Classical Logics, 28(1). Retrieved from https://doi.org/10.1080/11663081.2017.1397325

We investigate an aspect of defeasibility that has somewhat been overlooked by the non-monotonic reasoning community, namely that of defeasible modes of reasoning. These aim to formalise defeasibility of the traditional notion of necessity in modal logic, in particular of its different readings as action, knowledge and others in specific contexts, rather than defeasibility of conditional forms. Building on an extension of the preferential approach to modal logics, we introduce new modal operators with which to formalise the notion of defeasible necessity and distinct possibility, and that can be used to represent expected effects, refutable knowledge, and so on. We show how KLM-style conditionals can smoothly be integrated with our richer language. We also propose a tableau calculus which is sound and complete with respect to our modal preferential semantics, and of which the computational complexity remains in the same class as that of the underlying classical modal logic.

@article{182,
  author = {Katarina Britz and Ivan Varzinczak},
  title = {From KLM-Style Conditionals to Defeasible Modalities, and Back},
  abstract = {We investigate an aspect of defeasibility that has somewhat been overlooked by the non-monotonic reasoning community, namely that of defeasible modes of reasoning. These aim to formalise defeasibility of the traditional notion of necessity in modal logic, in particular of its different readings as action, knowledge and others in specific contexts, rather than defeasibility of conditional forms. Building on an extension of the preferential approach to modal logics, we introduce new modal operators with which to formalise the notion of defeasible necessity and distinct possibility, and that can be used to represent expected effects, refutable knowledge, and so on. We show how KLM-style conditionals can smoothly be integrated with our richer language. We also propose a tableau calculus which is sound and complete with respect to our modal preferential semantics, and of which the computational complexity remains in the same class as that of the underlying classical modal logic.},
  year = {2018},
  journal = {Journal of Applied Non-Classical Logics},
  volume = {28},
  pages = {92-121},
  issue = {1},
  publisher = {Taylor & Francis},
  url = {https://doi.org/10.1080/11663081.2017.1397325},
}
Price, C. S., Moodley, D., & Pillay, A. (2018). Dynamic Bayesian decision network to represent growers’ adaptive pre-harvest burning decisions in a sugarcane supply chain. In Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT ’18). New York NY: ACM. Retrieved from https://dl.acm.org/citation.cfm?id=3278681

Sugarcane growers usually burn their cane to facilitate its harvesting and transportation. Cane quality tends to deteriorate after burning, so it must be delivered as soon as possible to the mill for processing. This situation is dynamic and many factors, including weather conditions, delivery quotas and previous decisions taken, affect when and how much cane to burn. A dynamic Bayesian decision network (DBDN) was developed, using an iterative knowledge engineering approach, to represent sugarcane growers’ adaptive pre-harvest burning decisions. It was evaluated against five different scenarios which were crafted to represent the range of issues the grower faces when making these decisions. The DBDN was able to adapt reactively to delays in deliveries, although the model did not have enough states representing delayed delivery statuses. The model adapted proactively to rain forecasts, but only adapted reactively to high wind forecasts. The DBDN is a promising way of modelling such dynamic, adaptive operational decisions.

@inproceedings{181,
  author = {C. Sue Price and Deshen Moodley and Anban Pillay},
  title = {Dynamic Bayesian decision network to represent growers’ adaptive pre-harvest burning decisions in a sugarcane supply chain},
  abstract = {Sugarcane growers usually burn their cane to facilitate its harvesting and transportation.  Cane quality tends to deteriorate after burning, so it must be delivered as soon as possible to the mill for processing.  This situation is dynamic and many factors, including weather conditions, delivery quotas and previous decisions taken, affect when and how much cane to burn.  A dynamic Bayesian decision network (DBDN) was developed, using an iterative knowledge engineering approach, to represent sugarcane growers’ adaptive pre-harvest burning decisions.  It was evaluated against five different scenarios which were crafted to represent the range of issues the grower faces when making these decisions.  The DBDN was able to adapt reactively to delays in deliveries, although the model did not have enough states representing delayed delivery statuses.  The model adapted proactively to rain forecasts, but only adapted reactively to high wind forecasts.   The DBDN is a promising way of modelling such dynamic, adaptive operational decisions.},
  year = {2018},
  journal = {Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists (SAICSIT '18)},
  pages = {89-98},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {New York NY},
  isbn = {978-1-4503-6647-2},
  url = {https://dl.acm.org/citation.cfm?id=3278681},
}
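A single time-slice of such a decision network boils down to an expected-utility comparison at the decision node. The sketch below is not the paper's DBDN: the forecasts, probabilities, and utilities are invented solely to show the computation.

# P(rain | forecast) and utility[decision][rain?], all invented for illustration.
P_RAIN = {"rain_forecast": 0.7, "clear_forecast": 0.1}
UTILITY = {"burn_now": {True: 20.0, False: 100.0},   # rain ruins burnt cane
           "wait":     {True: 60.0, False: 70.0}}

def expected_utility(decision, forecast):
    p = P_RAIN[forecast]
    return p * UTILITY[decision][True] + (1 - p) * UTILITY[decision][False]

for forecast in P_RAIN:
    best = max(UTILITY, key=lambda d: expected_utility(d, forecast))
    print(forecast, "->", best)   # rain_forecast -> wait, clear_forecast -> burn_now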

2017

van der Meulen, T., de Vries, M., & Gerber, A. (2017). Demonstrating Approach Design Principles during the Development of a DEMO-based Enterprise Engineering Approach. In First International Workshop on Advanced Enterprise Modelling (AEM). Porto, Portugal: SCITEPRESS - Science and Technology Publications, Lda. http://doi.org/10.5220/0006382204710482

Enterprise engineering (EE) aims to address several phenomena in the evolution of an enterprise. One prominent phenomenon is the inability of the enterprise as a complex socio-technical system to adapt to rapidly-changing environments. In response to this phenomenon, many enterprise design approaches (with their own methodologies, frameworks, and modelling languages) emerged, but with little empirical evidence about their effectiveness. Furthermore, research indicates that multiple enterprise design approaches are used concurrently in industry, with each approach focusing on a sub-set of stakeholder concerns. The proliferating design approaches do not necessarily explicate their conditional use in terms of contextual prerequisites and demarcated design scope; and this also impairs their evaluation. Previous work suggested eleven design principles that would guide approach designers when they design or enhance an enterprise design approach. The design principles ensure that researchers contribute to the systematic growth of the EE knowledge base. This article provides a demonstration of the eleven principles during the development of a DEMO-based enterprise engineering approach, as well as a discussion to reflect on the usefulness of the principles.

@inproceedings{454,
  author = {Thomas van der Meulen and Marne de Vries and Aurona Gerber},
  title = {Demonstrating Approach Design Principles during the Development of a DEMO-based Enterprise Engineering Approach},
  abstract = {Enterprise engineering (EE) aims to address several phenomena in the evolution of an enterprise. One prominent phenomenon is the inability of the enterprise as a complex socio-technical system to adapt to rapidly-changing environments. In response to this phenomenon, many enterprise design approaches (with their own methodologies, frameworks, and modelling languages) emerged, but with little empirical evidence about their effectiveness. Furthermore, research indicates that multiple enterprise design approaches are used concurrently in industry, with each approach focusing on a sub-set of stakeholder concerns. The proliferating design approaches do not necessarily explicate their conditional use in terms of contextual prerequisites and demarcated design scope; and this also impairs their evaluation. Previous work suggested eleven design principles that would guide approach designers when they design or enhance an enterprise design approach. The design principles ensure that researchers contribute to the systematic growth of the EE knowledge base. This article provides a demonstration of the eleven principles during the development of a DEMO-based enterprise engineering approach, as well as a discussion to reflect on the usefulness of the principles.},
  year = {2017},
  journal = {First International Workshop on Advanced Enterprise Modelling (AEM)},
  pages = {471-482},
  month = {26/04-29/04},
  publisher = {SCITEPRESS - Science and Technology Publications, Lda.},
  address = {Porto, Portugal},
  isbn = {978-989-758-249-3},
  doi = {10.5220/0006382204710482},
}
Gerber, A., Baskerville, R., & van der Merwe, A. (2017). A Taxonomy of Classification Approaches in IS Research. In AMCIS. Retrieved from https://aisel.aisnet.org/amcis2017/PhilosophyIS

Even though the word classification appears in a number of publications in high ranking information systems (IS) journals, few discussions on the fundamental aspects regarding classification could be found. Most IS scholars intuitively embrace some classification approach as a fundamental activity in their research but without considering what classification entails. This paper reports on an investigation into classification, how classification is used within science and disciplines related to IS, as well as how it is approached within IS research itself. The main contribution of the paper is a proposed taxonomy of classification approaches (ToCA) that was validated by classifying classification approaches in relevant publications in three IS journals. ToCA provides a language for scholars to describe and comment, as well as understand the impact of the diverse adoption of classification approaches within IS research.

@inproceedings{453,
  author = {Aurona Gerber and Richard Baskerville and Alta van der Merwe},
  title = {A Taxonomy of Classification Approaches in IS Research},
  abstract = {Even though the word classification appears in a number of publications in high ranking information systems (IS) journals, few discussions on the fundamental aspects regarding classification could be found. Most IS scholars intuitively embrace some classification approach as a fundamental activity in their research but without considering what classification entails. This paper reports on an investigation into classification, how classification is used within science and disciplines related to IS, as well as how it is approached within IS research itself. The main contribution of the paper is a proposed taxonomy of classification approaches (ToCA) that was validated by classifying classification approaches in relevant publications in three IS journals. ToCA provides a language for scholars to describe and comment, as well as understand the impact of the diverse adoption of classification approaches within IS research.},
  year = {2017},
  journal = {AMCIS},
  month = {10/08-12/08},
  isbn = {978-0-9966831-4-2},
  url = {https://aisel.aisnet.org/amcis2017/PhilosophyIS},
}
van der Merwe, A., Gerber, A., & Smuts, H. (2017). Mapping a Design Science Research Cycle to the Postgraduate Research Report. Communications in Computer and Information Science, 730. http://doi.org/10.1007/978-3-319-69670-6_21

Design science research (DSR) is well-known in different domains, including information systems (IS), for the construction of artefacts. One of the most challenging aspects of IS postgraduate studies (with DSR) is determining the structure of the study and its report, which should reflect all the components necessary to build a convincing argument in support of such a study’s claims or assertions. Analysing several postgraduate IS-DSR reports as examples, this paper presents a mapping between recommendable structures for research reports and the DSR process model of Vaishnavi and Kuechler, which several of our current postgraduate students have found helpful.

@article{445,
  author = {Alta van der Merwe and Aurona Gerber and Hanlie Smuts},
  title = {Mapping a Design Science Research Cycle to the Postgraduate Research Report},
  abstract = {Design science research (DSR) is well-known in different domains, including information systems (IS), for the construction of artefacts. One of the most challenging aspects of IS postgraduate studies (with DSR) is determining the structure of the study and its report, which should reflect all the components necessary to build a convincing argument in support of such a study’s claims or assertions. Analysing several postgraduate IS-DSR reports as examples, this paper presents a mapping between recommendable structures for research reports and the DSR process model of Vaishnavi and Kuechler, which several of our current postgraduate students have found helpful.},
  year = {2017},
  journal = {Communications in Computer and Information Science},
  volume = {730},
  pages = {293-308},
  publisher = {Springer},
  address = {Cham},
  isbn = {978-3-319-69670-6},
  url = {https://link.springer.com/chapter/10.1007/978-3-319-69670-6_21},
  doi = {10.1007/978-3-319-69670-6_21},
}
Psillos, S., & Ruttkamp-Bloem, E. (2017). Scientific realism: quo vadis? Introduction: new thinking about scientific realism. Synthese, 194(4). http://doi.org/10.1007/s11229-017-1493-x

This Introduction has two foci: the first is a discussion of the motivation for and the aims of the 2014 conference on New Thinking about Scientific Realism in Cape Town, South Africa, and the second is a brief contextualization of the contributed articles in this special issue of Synthese in the framework of the conference. Each focus is discussed in a separate section.

@article{416,
  author = {Stathis Psillos and Emma Ruttkamp-Bloem},
  title = {Scientific realism: quo vadis? Introduction: new thinking about scientific realism},
  abstract = {This Introduction has two foci: the first is a discussion of the motivation for and the aims of the 2014 conference on New Thinking about Scientific Realism in Cape Town, South Africa, and the second is a brief contextualization of the contributed articles in this special issue of Synthese in the framework of the conference. Each focus is discussed in a separate section.},
  year = {2017},
  journal = {Synthese},
  volume = {194},
  pages = {3187-3201},
  issue = {4},
  publisher = {Springer},
  issn = {0039-7857, 1573-0964},
  doi = {10.1007/s11229-017-1493-x},
}
Bell, L., Meyer, T., & Mouton, F. (2017). Mobile On-board Vehicle Event Recorder: MOVER. In Information Communication Technology and Society Conference (ICTAS). http://doi.org/10.1109/ICTAS.2017.7920653

The rapid development of smart-phone technology in recent years has led to many smart-phone owners owning out-of-date devices, equipped with useful technologies, which are no longer in use. These devices are valuable resources that can be harnessed to improve users’ lives. This project aims at leveraging these older, unused devices to help improve road safety, specifically through the improved response time of emergency services to accident locations. An Android application — Mobile On-board Vehicle Event Recorder (MOVER) — was designed and built for the purpose of detecting car accidents through the use of acceleration thresholds. Driving data was gathered and crash simulations were run. With this data, testing and analysis were conducted in order to determine an acceleration threshold that separates normal driving from accident situations as accurately as possible. With this application, users can leverage their previous or current mobile devices to improve road safety - for themselves, and their area as a whole. A promising level of accuracy was achieved, but significant improvements can be made to the application. Large opportunity for future work exists in the field, and hopefully through the development of this application, other researchers may be more inclined to investigate and test such future work.

@inproceedings{358,
  author = {Luke Bell and Tommie Meyer and Francois Mouton},
  title = {Mobile On-board Vehicle Event Recorder: MOVER},
  abstract = {The rapid development of smart-phone technology in recent years has led to many smart-phone owners owning out-of-date devices, equipped with useful technologies, which are no longer in use. These devices are valuable resources that can be harnessed to improve users’ lives. This project aims at leveraging these older, unused devices to help improve road safety, specifically through the improved response time of emergency services to accident locations. An Android application — Mobile On-board Vehicle Event Recorder (MOVER) — was designed and built for the purpose of detecting car accidents through the use of acceleration thresholds. Driving data was gathered and crash simulations were run. With this data, testing and analysis were conducted in order to determine an acceleration threshold that separates normal driving from accident situations as accurately as possible. With this application, users can leverage their previous or current mobile devices to improve road safety - for themselves, and their area as a whole. A promising level of accuracy was achieved, but significant improvements can be made to the application. Large opportunity for future work exists in the field, and hopefully through the development of this application, other researchers may be more inclined to investigate and test such future work.},
  year = {2017},
  journal = {Information Communication Technology and Society Conference (ICTAS)},
  month = {9/03 - 10/03},
  url = {https://www.researchgate.net/publication/316239845_Mobile_on-board_vehicle_event_recorder_MOVER},
  doi = {10.1109/ICTAS.2017.7920653},
}
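The detection principle is a threshold test on acceleration magnitude. A minimal sketch with an assumed 4g cutoff and made-up accelerometer samples, not the app's tuned values:

from math import sqrt

G = 9.81
CRASH_THRESHOLD = 4 * G          # m/s^2; the real cutoff was tuned empirically

samples = [(0.1, 9.7, 0.3), (1.2, 9.9, 0.8), (38.0, 15.0, 22.0)]  # (x, y, z)

for x, y, z in samples:
    magnitude = sqrt(x * x + y * y + z * z)
    if magnitude > CRASH_THRESHOLD:
        print(f"possible crash: |a| = {magnitude:.1f} m/s^2")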
Gerber, A., Morar, N., & Meyer, T. (2017). Ontology-driven taxonomic workflows for Afrotropical Bees. In TDWG Annual Conference. Retrieved from http://pubs.cs.uct.ac.za/id/eprint/1206

This poster presents the results of an investigation into the use of ontology technologies to support taxonomy functions. Taxonomy is the science of naming and grouping biological organisms into a hierarchy. A core function of biological taxonomy is the classification and revised classification of biological organisms into an agreed upon taxonomic structure based on sets of shared characteristics. Recent developments in knowledge representation within Computer Science include the establishment of computational ontologies. Such ontologies are particularly well suited to support classification functions such as those used in biological taxonomy. Using a specific genus of Afrotropical bees, this research project captured and represented the taxonomic knowledge base into an OWL2 ontology. In addition, the project used and extended available reasoning algorithms over the ontology to draw inferences that support the necessary taxonomy functions, and developed an application, the web ontology classifier (WOC). The WOC uses the Afrotropical bee ontology and demonstrates the taxonomic functions namely: identification (keys) as well as the description and comparison of taxa (taxonomic revision).

@inproceedings{357,
  author = {Aurona Gerber and Nishal Morar and Tommie Meyer},
  title = {Ontology-driven taxonomic workflows for Afrotropical Bees},
  abstract = {This poster presents the results of an investigation into the use of ontology technologies to support taxonomy functions. Taxonomy is the science of naming and grouping biological organisms into a hierarchy. A core function of biological taxonomy is the classification and revised classification of biological organisms into an agreed upon taxonomic structure based on sets of shared characteristics. Recent developments in knowledge representation within Computer Science include the establishment of computational ontologies. Such ontologies are particularly well suited to support classification functions such as those used in biological taxonomy. Using a specific genus of Afrotropical bees, this research project captured and represented the taxonomic knowledge base into an OWL2 ontology. In addition, the project used and extended available reasoning algorithms over the ontology to draw inferences that support the necessary taxonomy functions, and developed an application, the web ontology classifier (WOC). The WOC uses the Afrotropical bee ontology and demonstrates the taxonomic functions namely: identification (keys) as well as the description and comparison of taxa (taxonomic revision).},
  year = {2017},
  journal = {TDWG Annual Conference},
  month = {2/10 - 6/10},
  url = {http://pubs.cs.uct.ac.za/id/eprint/1206},
}
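Computationally, the identification keys the WOC supports are small decision trees over character questions. A sketch with invented characters and taxa, not the Afrotropical bee ontology:

# A node is either a taxon name or a (question, yes_branch, no_branch) triple.
KEY = ("hairy hind legs?",
       ("banded abdomen?", "Taxon A", "Taxon B"),
       "Taxon C")

def identify(node, answers):
    if isinstance(node, str):
        return node
    question, yes_branch, no_branch = node
    return identify(yes_branch if answers[question] else no_branch, answers)

specimen = {"hairy hind legs?": True, "banded abdomen?": False}
print(identify(KEY, specimen))   # -> Taxon B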
Van Niekerk, D. R., Van Heerden, C. J., Davel, M. H., Kleynhans, N., Kjartansson, O., Jansche, M., & Ha, L. (2017). Rapid development of TTS corpora for four South African languages. In Interspeech. Stockholm, Sweden. http://doi.org/10.21437/Interspeech.2017-1139

This paper describes the development of text-to-speech corpora for four South African languages. The approach followed investigated the possibility of using low-cost methods, including informal recording environments and untrained volunteer speakers. This objective and the additional future goal of expanding the corpus to increase coverage of South Africa’s 11 official languages necessitated experimenting with multi-speaker and code-switched data. The process and relevant observations are detailed throughout. The latest version of the corpora is available for download under an open-source licence and will likely see further development and refinement in future.

@inproceedings{278,
  author = {Daniel Van Niekerk and Charl Van Heerden and Marelie Davel and Neil Kleynhans and Oddur Kjartansson and Martin Jansche and Linne Ha},
  title = {Rapid development of TTS corpora for four South African languages},
  abstract = {This paper describes the development of text-to-speech corpora for four South African languages. The approach followed investigated the possibility of using low-cost methods, including informal recording environments and untrained volunteer speakers. This objective and the additional future goal of expanding the corpus to increase coverage of South Africa’s 11 official languages necessitated experimenting with multi-speaker and code-switched data. The process and relevant observations are detailed throughout. The latest version of the corpora is available for download under an open-source licence and will likely see further development and refinement in future.},
  year = {2017},
  journal = {Interspeech},
  pages = {2178-2182},
  address = {Stockholm, Sweden},
  doi = {10.21437/Interspeech.2017-1139},
}