Research Publications
2019
When ICT and the Internet first developed, there was little indication that they would grow into a pervasive revolution that could be misused for criminal activities. Cybercrime is increasing more rapidly than expected. IBM estimated in 2016 that by 2019 the global cost of cybercrime would reach $2 trillion, a threefold increase from the 2015 estimate of $500 billion. Organised crime is using cyber platforms in a much more sophisticated way that requires a highly skilled and specialised law enforcement response. Cryptocurrencies create opportunities for criminals to hide proceeds, and cryptocurrency-mining malware lets cybercriminals cash in on the unprecedented success of these currencies. Regulatory frameworks have to be updated to respond effectively to unlawful activities relating to cybercrime. Governments must take a holistic approach to developing a law enforcement strategy and implementation plan that addresses the phenomenon of cybercrime. Currently, most African countries address cybercrime in an uncoordinated and fragmented way. This paper presents a framework for African countries to develop and implement a national cybercrime strategy.
@{270, author = {J.C van Vuuren and Louise Leenen and P Pieterse}, title = {Framework for the development and implementation of a cybercrime strategy in Africa}, abstract = {When ICT and the Internet first developed, there was little indication that they would grow into a pervasive revolution that could be misused for criminal activities. Cybercrime is increasing more rapidly than expected. IBM estimated in 2016 that by 2019 the global cost of cybercrime would reach $2 trillion, a threefold increase from the 2015 estimate of $500 billion. Organised crime is using cyber platforms in a much more sophisticated way that requires a highly skilled and specialised law enforcement response. Cryptocurrencies create opportunities for criminals to hide proceeds, and cryptocurrency-mining malware lets cybercriminals cash in on the unprecedented success of these currencies. Regulatory frameworks have to be updated to respond effectively to unlawful activities relating to cybercrime. Governments must take a holistic approach to developing a law enforcement strategy and implementation plan that addresses the phenomenon of cybercrime. Currently, most African countries address cybercrime in an uncoordinated and fragmented way. This paper presents a framework for African countries to develop and implement a national cybercrime strategy.}, year = {2019}, journal = {International Conference on Cyber Warfare and Security (ICCWS)}, month = {28/02 - 1/03}, address = {Stellenbosch}, }
The modern-day workforce is more likely to be diverse, and it is imperative for managers to be aware of the influence diversity has on leadership in their organisations. An effective leadership approach should take the diversity of a work team in terms of culture, age, gender, ethnicity and other factors into account. Although there are studies on the effect of national cultures on leadership and decision-making, many modern organisations employ an international workforce. This paper presents research on a methodology to build a decision model to support the selection of an appropriate leadership approach for a diverse team based on the composition of the team. The method to build such a decision model is based on Saaty’s well-known Analytic Hierarchy Process (AHP) (Saaty, 1990) for solving multi-criteria decision problems. AHP allows an optimal trade-off among the criteria based on the judgments of experts in the problem area. In this paper, AHP is extended to incorporate a diversity profile of the team into the decision problem. Although there are many studies on effective leadership styles, there is very limited research on the selection of an effective leadership style for a specific team. The focus of this research is on a methodology to construct a decision model for this problem and not on the social science of diversity and its influence on employees and leaders. An example is included to show how this model-building methodology can be used in practice. The next phase of this research will be to populate and automate the model based on results from research on diversity and leadership.
@{269, author = {Louise Leenen and A van Heerden and Phelela Ngcingwana and L Masole}, title = {A Model to Select a Leadership Approach for a Diverse Team}, abstract = {The modern-day workforce is more likely to be diverse, and it is imperative for managers to be aware of the influence diversity has on leadership in their organisations. An effective leadership approach should take the diversity of a work team in terms of culture, age, gender, ethnicity and other factors into account. Although there are studies on the effect of national cultures on leadership and decision-making, many modern organisations employ an international workforce. This paper presents research on a methodology to build a decision model to support the selection of an appropriate leadership approach for a diverse team based on the composition of the team. The method to build such a decision model is based on Saaty’s well-known Analytic Hierarchy Process (AHP) (Saaty, 1990) for solving multi-criteria decision problems. AHP allows an optimal trade-off among the criteria based on the judgments of experts in the problem area. In this paper, AHP is extended to incorporate a diversity profile of the team into the decision problem. Although there are many studies on effective leadership styles, there is very limited research on the selection of an effective leadership style for a specific team. The focus of this research is on a methodology to construct a decision model for this problem and not on the social science of diversity and its influence on employees and leaders. An example is included to show how this model-building methodology can be used in practice. The next phase of this research will be to populate and automate the model based on results from research on diversity and leadership.}, year = {2019}, journal = {European Conference on Research Methodology for Business and Management Studies (ECRM)}, month = {21/06 - 22/06}, address = {Johannesburg}, doi = {10.34190/RM.19.110}, }
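The core of the AHP step this paper builds on is deriving priority weights from a pairwise-comparison matrix via its principal eigenvector. The sketch below (Python; the three diversity criteria and the judgement values are hypothetical illustrations, not taken from the paper) shows that standard computation:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise-comparison matrix via the
    principal eigenvector (Saaty's AHP)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Hypothetical judgements on Saaty's 1-9 scale: how much more important
# each criterion (culture, age, gender) is than the others.
criteria = ahp_weights([[1,   3,   5],
                        [1/3, 1,   2],
                        [1/5, 1/2, 1]])
print(criteria)  # roughly [0.65, 0.23, 0.12]
```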
Most military forces recognise the importance and the challenges of cyber as an operational domain. In addition to specialised cyber units, cyber is present in every division and arm of service; as a result, the military faces increasing risks from cyber threats. It is thus crucial to establish and maintain a capability to ensure cybersecurity. Most organisations purchase and use technical controls to counter cyber threats, but users are considered the weakest link in maintaining cybersecurity, even if they are cyber aware. The cultivation of a cybersecurity culture has been shown to be the best approach to address human behaviour in the cyber domain, and the development and fostering of an organisational cybersecurity culture is receiving increasing attention. This paper gives an overview of existing frameworks and guidelines in this regard and applies these approaches to the military environment, which differs markedly from a business environment in terms of the nature of the work and traditional military culture. The paper proposes a framework for a military force to cultivate and foster a cybersecurity culture within the traditional military culture. This framework still has to be tested in a military environment.
@{267, author = {Louise Leenen and J.C van Vuuren}, title = {Framework for the Cultivation of a Military Cybersecurity Culture}, abstract = {Most military forces recognise the importance and the challenges of cyber as an operational domain. In addition to specialised cyber units, cyber is present in every division and arm of service; as a result, the military faces increasing risks from cyber threats. It is thus crucial to establish and maintain a capability to ensure cybersecurity. Most organisations purchase and use technical controls to counter cyber threats, but users are considered the weakest link in maintaining cybersecurity, even if they are cyber aware. The cultivation of a cybersecurity culture has been shown to be the best approach to address human behaviour in the cyber domain, and the development and fostering of an organisational cybersecurity culture is receiving increasing attention. This paper gives an overview of existing frameworks and guidelines in this regard and applies these approaches to the military environment, which differs markedly from a business environment in terms of the nature of the work and traditional military culture. The paper proposes a framework for a military force to cultivate and foster a cybersecurity culture within the traditional military culture. This framework still has to be tested in a military environment.}, year = {2019}, journal = {International Conference on Cyber Warfare and Security (ICCWS)}, month = {28/02 - 1/03}, url = {https://www.researchgate.net/publication/336605506_Framework_for_the_Cultivation_of_a_Military_Cybersecurity_Culture}, }
The International Institute for Strategic Studies (2018: 6) states that “cyber capability should now be seen as a key aspect of some states’ coercive power ... This has driven some European states to re-examine their industrial, political, social and economic vulnerabilities, influence operations and information warfare, as well as more traditional areas of military power.” Cybersecurity is often incorrectly assumed to be a purely technical field; however, there are numerous multidisciplinary aspects. The very nature of cybersecurity and operations in cyberspace is disruptive, and this is true for many disciplines attempting to introduce cybersecurity research into their offerings. This can create challenges for researchers and students whose methodologies do not necessarily follow disciplinary norms and are prejudiced against by old-school thought. Differing foundational understandings of concepts may also hinder multidisciplinary research, as specific terminology used in cybersecurity may be considered colloquial or have different meanings in other disciplinary settings. The experimental, observational and mathematical research methodologies often employed by computer scientists do not address the political or legal aspects of cybersecurity research, and research methods for cybersecurity generally apply and teach a limited set of scientific methods for creating new knowledge, validating theories and providing critical insights into the cybersecurity arena. This paper aims to investigate the South African national and institutional perspectives on higher education and research, identify challenges, and propose interventions to facilitate multidisciplinary research into cybersecurity and cyberwarfare in South Africa. Legislation and policies, organisational structures, processes, resources, and historical and socio-economic factors are discussed in terms of their influence on cybersecurity research. A review and analysis of international efforts towards multidisciplinary research in higher education institutions provides a basis for proposing a framework for South African higher education institutions to implement cybersecurity research effectively.
@{266, author = {Trishana Ramluckan and Brett van Niekerk and Louise Leenen}, title = {Research Challenges for Cybersecurity and Cyberwarfare: A South African Perspective}, abstract = {The International Institute for Strategic Studies (2018: 6) states that “cyber capability should now be seen as a key aspect of some states’ coercive power ... This has driven some European states to re-examine their industrial, political, social and economic vulnerabilities, influence operations and information warfare, as well as more traditional areas of military power.” Cybersecurity is often incorrectly assumed to be a purely technical field; however, there are numerous multidisciplinary aspects. The very nature of cybersecurity and operations in cyberspace is disruptive, and this is true for many disciplines attempting to introduce cybersecurity research into their offerings. This can create challenges for researchers and students whose methodologies do not necessarily follow disciplinary norms and are prejudiced against by old-school thought. Differing foundational understandings of concepts may also hinder multidisciplinary research, as specific terminology used in cybersecurity may be considered colloquial or have different meanings in other disciplinary settings. The experimental, observational and mathematical research methodologies often employed by computer scientists do not address the political or legal aspects of cybersecurity research, and research methods for cybersecurity generally apply and teach a limited set of scientific methods for creating new knowledge, validating theories and providing critical insights into the cybersecurity arena. This paper aims to investigate the South African national and institutional perspectives on higher education and research, identify challenges, and propose interventions to facilitate multidisciplinary research into cybersecurity and cyberwarfare in South Africa. Legislation and policies, organisational structures, processes, resources, and historical and socio-economic factors are discussed in terms of their influence on cybersecurity research. A review and analysis of international efforts towards multidisciplinary research in higher education institutions provides a basis for proposing a framework for South African higher education institutions to implement cybersecurity research effectively.}, year = {2019}, journal = {European Conference on Cyber Warfare and Security (ECCWS)}, month = {04/07 - 05/07}, address = {Portugal}, url = {https://www.researchgate.net/publication/334327321_Research_Challenges_for_Cybersecurity_and_Cyberwarfare_A_South_African_Perspective}, }
A degenerate or indeterminate string on an alphabet Σ is a sequence of non-empty subsets of Σ. Given a degenerate string t of length n and its Burrows–Wheeler transform, we present a new method for searching for a degenerate pattern of length m in t running in O(mn) time on a constant-size alphabet Σ. Furthermore, it is a hybrid pattern matching technique that works on both regular and degenerate strings. A degenerate string is said to be conservative if its number of non-solid letters is upper-bounded by a fixed positive constant q; in this case we show that the search time complexity is O(qm^2) for counting the number of occurrences and O(qm^2 + occ) for reporting the found occurrences, where occ is the number of occurrences of the pattern in t. Experimental results show that our method performs well in practice.
@article{265, author = {J.W. Daykin and R. Groult and Y. Guesnet and T. Lecroq and A. Lefebvre and M. Leonard and L. Mouchard and E. Prieur-Gaston and Bruce Watson}, title = {Efficient pattern matching in degenerate strings with the Burrows–Wheeler transform}, abstract = {A degenerate or indeterminate string on an alphabet Σ is a sequence of non-empty subsets of Σ. Given a degenerate string t of length n and its Burrows–Wheeler transform, we present a new method for searching for a degenerate pattern of length m in t running in O(mn) time on a constant-size alphabet Σ. Furthermore, it is a hybrid pattern matching technique that works on both regular and degenerate strings. A degenerate string is said to be conservative if its number of non-solid letters is upper-bounded by a fixed positive constant q; in this case we show that the search time complexity is O(qm^2) for counting the number of occurrences and O(qm^2 + occ) for reporting the found occurrences, where occ is the number of occurrences of the pattern in t. Experimental results show that our method performs well in practice.}, year = {2019}, journal = {Information Processing Letters}, volume = {147}, pages = {82 - 87}, publisher = {Elsevier}, doi = {https://doi.org/10.1016/j.ipl.2019.03.003}, }
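To make the problem concrete, here is a naive O(mn) matcher for degenerate strings, a hedged Python sketch of the problem definition only; the paper's BWT-based method and its O(qm^2) conservative-case search are not reproduced here:

```python
def degenerate_match(text, pattern):
    """Naive O(mn) search: text and pattern are sequences of sets of
    symbols; two positions match when their symbol sets intersect."""
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1)
            if all(text[i + j] & pattern[j] for j in range(m))]

# 'Solid' positions are singleton sets; non-solid positions list options.
text = [{'a'}, {'c', 'g'}, {'t'}, {'a'}, {'c'}]
pattern = [{'c'}, {'t', 'g'}]
print(degenerate_match(text, pattern))  # [1] -- one occurrence, at position 1
```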
In many software applications, it is necessary to preserve the confidentiality of information. Therefore, security mechanisms are needed to enforce that secret information does not leak to unauthorized users. However, most language-based techniques that enable information flow control work post-hoc, deciding whether a specific program violates a confidentiality policy. In contrast, we proposed in previous work a refinement-based approach to derive programs that preserve confidentiality-by-construction. This approach follows the principles of Dijkstra’s correctness-by-construction. In this extended abstract, we present the implementation and tool support for that refinement-based approach, allowing developers to specify the information flow policies first and then to create programs in a simple while-language that comply with these policies by construction. In particular, we present the idea of confidentiality-by-construction using an example and discuss the IDE C-CorC supporting this development approach.
@article{263, author = {T. Runge and I. Schaefer and A. Knuppel and L.G.W.A. Cleophas and D.G Kourie and Bruce Watson}, title = {Tool Support for Confidentiality-by-Construction}, abstract = {In many software applications, it is necessary to preserve the confidentiality of information. Therefore, security mechanisms are needed to enforce that secret information does not leak to unauthorized users. However, most language-based techniques that enable information flow control work post-hoc, deciding whether a specific program violates a confidentiality policy. In contrast, we proposed in previous work a refinement-based approach to derive programs that preserve confidentiality-by-construction. This approach follows the principles of Dijkstra’s correctness-by-construction. In this extended abstract, we present the implementation and tool support for that refinement-based approach, allowing developers to specify the information flow policies first and then to create programs in a simple while-language that comply with these policies by construction. In particular, we present the idea of confidentiality-by-construction using an example and discuss the IDE C-CorC supporting this development approach.}, year = {2019}, journal = {Ada User Journal}, volume = {38}, pages = {64 - 68}, issue = {2}, doi = {https://doi.org/10.1145/3375408.3375413}, }
Correctness-by-Construction (CbC) is an approach to incrementally create formally correct programs guided by pre- and postcondition specifications. A program is created using refinement rules that guarantee the resulting implementation is correct with respect to the specification. Although CbC is supposed to lead to code with a low defect rate, it is not prevalent, especially because appropriate tool support is missing. To promote CbC, we provide tool support for CbC-based program development. We present CorC, a graphical and textual IDE to create programs in a simple while-language following the CbC approach. Starting with a specification, our open source tool supports CbC developers in refining a program by a sequence of refinement steps and in verifying the correctness of these refinement steps using the theorem prover KeY. We evaluated the tool on a set of standard CbC examples, revealing errors in the provided specifications. The evaluation shows that our tool reduces the verification time in comparison to post-hoc verification.
@{262, author = {T. Runge and I. Schaefer and L.G.W.A. Cleophas and T. Thum and D.G Kourie and Bruce Watson}, title = {Tool Support for Correctness-by-Construction}, abstract = {Correctness-by-Construction (CbC) is an approach to incrementally create formally correct programs guided by pre- and postcondition specifications. A program is created using refinement rules that guarantee the resulting implementation is correct with respect to the specification. Although CbC is supposed to lead to code with a low defect rate, it is not prevalent, especially because appropriate tool support is missing. To promote CbC, we provide tool support for CbC-based program development. We present CorC, a graphical and textual IDE to create programs in a simple while-language following the CbC approach. Starting with a specification, our open source tool supports CbC developers in refining a program by a sequence of refinement steps and in verifying the correctness of these refinement steps using the theorem prover KeY. We evaluated the tool on a set of standard CbC examples, revealing errors in the provided specifications. The evaluation shows that our tool reduces the verification time in comparison to post-hoc verification.}, year = {2019}, journal = {European Joint Conferences on Theory and Practice of Software (ETAPS)}, pages = {25 - 42}, month = {06/04 - 11/04}, publisher = {Springer}, address = {Switzerland}, isbn = {978-3-030-16721-9}, url = {https://link.springer.com/content/pdf/10.1007/978-3-030-16722-6.pdf}, doi = {https://doi.org/10.1007/978-3-030-16722-6_2}, }
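As a flavour of the refinement style CorC supports, the sketch below uses plain Python with assertions standing in for the pre-/postcondition specification (CorC itself works on a dedicated while-language verified with KeY; the example program is our own, not from the paper). The if-refinement is justified because each guard, together with the precondition, establishes the postcondition:

```python
def imax(a: int, b: int) -> int:
    # Precondition: a and b are integers.
    assert isinstance(a, int) and isinstance(b, int)
    # Refine {true} r := ? {r >= a and r >= b and r in {a, b}}
    # by an if-statement whose guards cover all cases:
    if a >= b:
        r = a   # guard a >= b gives r >= a and r >= b, with r = a
    else:
        r = b   # guard a < b gives r >= a and r >= b, with r = b
    # Postcondition holds by construction in both branches:
    assert r >= a and r >= b and r in (a, b)
    return r
```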
Information Systems (IS) as a discipline is still young and is continuously involved in building its own research knowledge base. Design Science Research (DSR) in IS is a research strategy for design that has emerged in the last 16 years. IS researchers, especially young researchers, are often lost when they start with a project in DSR. We identified a need for a set of guidelines with supporting reference literature that can assist such novice adopters of DSR. We identified major themes relevant to DSR and proposed a set of six guidelines for the novice researcher, supported with reference summaries of seminal works from the IS DSR literature. We believe that someone new to the field can use these guidelines to prepare themselves to embark on a DSR study.
@{261, author = {Alta van der Merwe and Aurona Gerber and Hanlie Smuts}, title = {Guidelines for Conducting Design Science Research in Information Systems}, abstract = {Information Systems (IS) as a discipline is still young and is continuously involved in building its own research knowledge base. Design Science Research (DSR) in IS is a research strategy for design that has emerged in the last 16 years. IS researchers, especially young researchers, are often lost when they start with a project in DSR. We identified a need for a set of guidelines with supporting reference literature that can assist such novice adopters of DSR. We identified major themes relevant to DSR and proposed a set of six guidelines for the novice researcher, supported with reference summaries of seminal works from the IS DSR literature. We believe that someone new to the field can use these guidelines to prepare themselves to embark on a DSR study.}, year = {2019}, journal = {SACLA}, month = {15/07 - 17/07}, publisher = {Springer}, isbn = {978-3-030-35628-6}, doi = {10.1007/978-3-030-35629-3_11}, }
Digital disruption is the phenomenon whereby established businesses succumb to new business models that exploit emerging technologies. Futurists often make dire predictions when discussing the impact of digital disruption, for instance that 40% of the Fortune 500 companies will disappear within the next decade. The digital disruption phenomenon was already studied two decades ago, when Clayton Christensen developed the Theory of Disruptive Innovation, a popular theory for describing and explaining disruption due to technology developments that occurred in the past. However, it is still problematic to understand what is necessary to avoid disruption, especially within the context of a sustainable society in the 21st century. A key aspect we identified is the behavior of non-mainstream customers of an emerging technology, which is difficult to predict, especially when an organization is operating in an existing solution space. In this position paper we propose complementing the Theory of Disruptive Innovation with design thinking in order to identify the performance attributes that encourage the unpredictable and unforeseen customer behavior that causes disruption. We employ case-based scenario analysis of higher education as an evaluation mechanism for our extended disruptive innovation theory. Our position is that a better understanding of the implicit and unpredictable customer behavior that causes disruption due to additional performance attributes (using design thinking) could assist organizations to pre-empt digital disruption and adapt to support the additional functionality.
@{259, author = {Aurona Gerber and Machdel Matthee}, title = {Design Thinking for Pre-empting Digital Disruption}, abstract = {Digital disruption is the phenomenon whereby established businesses succumb to new business models that exploit emerging technologies. Futurists often make dire predictions when discussing the impact of digital disruption, for instance that 40% of the Fortune 500 companies will disappear within the next decade. The digital disruption phenomenon was already studied two decades ago, when Clayton Christensen developed the Theory of Disruptive Innovation, a popular theory for describing and explaining disruption due to technology developments that occurred in the past. However, it is still problematic to understand what is necessary to avoid disruption, especially within the context of a sustainable society in the 21st century. A key aspect we identified is the behavior of non-mainstream customers of an emerging technology, which is difficult to predict, especially when an organization is operating in an existing solution space. In this position paper we propose complementing the Theory of Disruptive Innovation with design thinking in order to identify the performance attributes that encourage the unpredictable and unforeseen customer behavior that causes disruption. We employ case-based scenario analysis of higher education as an evaluation mechanism for our extended disruptive innovation theory. Our position is that a better understanding of the implicit and unpredictable customer behavior that causes disruption due to additional performance attributes (using design thinking) could assist organizations to pre-empt digital disruption and adapt to support the additional functionality.}, year = {2019}, journal = {Conference on e-Business, e-Services and e-Society}, pages = {759 - 770}, month = {18/09 - 20/09}, publisher = {Springer}, isbn = {978-3-030-29373-4}, doi = {10.1007/978-3-030-29374-1_62}, }
Advanced modeling is a challenging endeavor and good tool support is of paramount importance to ensure that the modeling objectives are met through the efficient execution of tasks. Tools for advanced modeling should not just support basic task modeling functionality such as easy-to-use interfaces for model creation, but also advanced task functionality such as consistency checks and analysis queries. Enterprise Architecture (EA) is concerned with the alignment of all aspects of an organization. Modeling plays a crucial role in EA, and matching the correct tool to enable task execution is vital for enterprises engaged with EA. Enterprise Architecture Management (EAM) reflects recent trends that elevate EA toward a strategic management function within organizations. Tool support for EAM would necessarily include the execution of additional and often implicit advanced modeling tasks that support EAM capabilities. In this paper we report on a study that used the Task-Technology Fit (TTF) theory to investigate the extent to which basic and advanced task execution for EAM is supported by technology. We found that four of the six TTF factors fully supported EAM task execution, one partially supported it, and one was inconclusive. This study provided insight into investigating tool support for EAM-related task execution to achieve strategic EAM goals.
@inbook{258, author = {Sunet Eybers and Aurona Gerber and Dominik Bork and Dimitris Karagiannis}, title = {Matching Technology with Enterprise Architecture and Enterprise Architecture Management Tasks Using Task Technology Fit}, abstract = {Advanced modeling is a challenging endeavor and good tool support is of paramount importance to ensure that the modeling objectives are met through the efficient execution of tasks. Tools for advanced modeling should not just support basic task modeling functionality such as easy-to-use interfaces for model creation, but also advanced task functionality such as consistency checks and analysis queries. Enterprise Architecture (EA) is concerned with the alignment of all aspects of an organization. Modeling plays a crucial role in EA, and matching the correct tool to enable task execution is vital for enterprises engaged with EA. Enterprise Architecture Management (EAM) reflects recent trends that elevate EA toward a strategic management function within organizations. Tool support for EAM would necessarily include the execution of additional and often implicit advanced modeling tasks that support EAM capabilities. In this paper we report on a study that used the Task-Technology Fit (TTF) theory to investigate the extent to which basic and advanced task execution for EAM is supported by technology. We found that four of the six TTF factors fully supported EAM task execution, one partially supported it, and one was inconclusive. This study provided insight into investigating tool support for EAM-related task execution to achieve strategic EAM goals.}, year = {2019}, journal = {Lecture Notes in Business Information Processing}, pages = {245 - 260}, publisher = {Springer}, isbn = {978-3-030-20617-8}, doi = {10.1007/978-3-030-20618-5_17}, }
Visual languages make use of spatial arrangements of graphical and textual elements to represent information. Domain-specific diagrams, including flowcharts and music sheets, are examples of visual languages. An established area of research is the study of languages which can be used to create declarative specifications of visual languages. In this paper, the result of a review of research on visual language specification languages is presented. Specifically, a structured literature review is conducted to establish research themes by analysing what has been studied in the context of specification languages. The result of the literature review is used to develop a conceptual framework that consists of six research themes with related topics. Additionally, this paper discusses how the conceptual framework can be used as a basis to guide research in the field of specification languages, to perform feature-based characterisations, and to create lists of criteria to evaluate and compare specification languages.
@{255, author = {Anitta Thomas and Aurona Gerber and Alta van der Merwe}, title = {A Conceptual Framework of Research on Visual Language Specification Languages}, abstract = {Visual languages make use of spatial arrangements of graphical and textual elements to represent information. Domain-specific diagrams, including flowcharts and music sheets, are examples of visual languages. An established area of research is the study of languages which can be used to create declarative specifications of visual languages. In this paper, the result of a review of research on visual language specification languages is presented. Specifically, a structured literature review is conducted to establish research themes by analysing what has been studied in the context of specification languages. The result of the literature review is used to develop a conceptual framework that consists of six research themes with related topics. Additionally, this paper discusses how the conceptual framework can be used as a basis to guide research in the field of specification languages, to perform feature-based characterisations, and to create lists of criteria to evaluate and compare specification languages.}, year = {2019}, journal = {International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD)}, month = {05/09 - 06/09}, publisher = {IEEE}, address = {Winterton, South Africa}, isbn = {978-1-5386-9236-3}, url = {https://ieeexplore.ieee.org/document/8851003}, doi = {10.1109/ICABCD.2019.8851003}, }
Many posterior distributions take intractable forms for which analytical solutions cannot be found, and thus require approximate inference. Variational Inference (VI) and Markov Chain Monte Carlo (MCMC) are established mechanisms to approximate these intractable values. An alternative to sampling and optimisation for approximation is a direct mapping between the data and the posterior distribution, made possible by recent advances in deep learning methods. Latent Dirichlet Allocation (LDA) is a model which offers an intractable posterior of this nature. In LDA, latent topics are learnt over unlabelled documents to soft cluster the documents. This paper assesses the viability of learning latent topics leveraging an autoencoder (in the form of autoencoding variational Bayes, AEVB) and compares the mimicked posterior distributions to those achieved by VI. After conducting various experiments, the proposed AEVB delivers inadequate performance. Comparable conclusions are achieved only under utopian conditions that are generally unattainable. Further, model specification becomes increasingly complex and deeply circumstantially dependent, which is in itself not a deterrent but does warrant consideration. In a recent study, these concerns were highlighted and discussed theoretically. We confirm the argument empirically by dissecting the autoencoder’s iterative process. In investigating the autoencoder, we see performance degrade as models grow in dimensionality. Visualization of the autoencoder reveals a bias towards the initial randomised topics.
@{254, author = {Zach Wolpe and Alta de Waal}, title = {Autoencoding variational Bayes for latent Dirichlet allocation}, abstract = {Many posterior distributions take intractable forms for which analytical solutions cannot be found, and thus require approximate inference. Variational Inference (VI) and Markov Chain Monte Carlo (MCMC) are established mechanisms to approximate these intractable values. An alternative to sampling and optimisation for approximation is a direct mapping between the data and the posterior distribution, made possible by recent advances in deep learning methods. Latent Dirichlet Allocation (LDA) is a model which offers an intractable posterior of this nature. In LDA, latent topics are learnt over unlabelled documents to soft cluster the documents. This paper assesses the viability of learning latent topics leveraging an autoencoder (in the form of autoencoding variational Bayes, AEVB) and compares the mimicked posterior distributions to those achieved by VI. After conducting various experiments, the proposed AEVB delivers inadequate performance. Comparable conclusions are achieved only under utopian conditions that are generally unattainable. Further, model specification becomes increasingly complex and deeply circumstantially dependent, which is in itself not a deterrent but does warrant consideration. In a recent study, these concerns were highlighted and discussed theoretically. We confirm the argument empirically by dissecting the autoencoder’s iterative process. In investigating the autoencoder, we see performance degrade as models grow in dimensionality. Visualization of the autoencoder reveals a bias towards the initial randomised topics.}, year = {2019}, journal = {Proceedings of the South African Forum for Artificial Intelligence Research}, pages = {25-36}, month = {12/09}, publisher = {CEUR Workshop Proceedings}, isbn = {1613-0073}, url = {http://ceur-ws.org/Vol-2540/FAIR2019_paper_33.pdf}, }
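For readers unfamiliar with the setup, the following is a minimal PyTorch sketch of an AEVB-style topic model in the spirit of AVITM, using a softmax over Gaussian latents and a standard normal prior in place of the Dirichlet. It illustrates the general technique only; the architecture, sizes and prior are our assumptions, not the authors' experimental model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AEVBTopicModel(nn.Module):
    """Encoder maps a bag-of-words vector to Gaussian parameters; a softmax
    over the reparameterised sample yields document-topic proportions."""
    def __init__(self, vocab_size, n_topics, hidden=100):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Softplus())
        self.mu = nn.Linear(hidden, n_topics)
        self.logvar = nn.Linear(hidden, n_topics)
        self.beta = nn.Linear(n_topics, vocab_size, bias=False)  # topic-word logits

    def forward(self, bow):
        h = self.enc(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        theta = F.softmax(z, dim=-1)                 # document-topic proportions
        recon = F.log_softmax(self.beta(theta), dim=-1)
        nll = -(bow * recon).sum(-1)                 # multinomial reconstruction
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)  # vs N(0, I)
        return (nll + kl).mean()                     # negative ELBO

model = AEVBTopicModel(vocab_size=2000, n_topics=20)
loss = model(torch.rand(8, 2000))   # a batch of 8 bag-of-words vectors
loss.backward()
```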
Environmental information is acquired and assessed during the environmental impact assessment process for surface‐strip coal mine approval. However, integrating these data and quantifying rehabilitation risk using a holistic multidisciplinary approach is seldom undertaken. We present a rehabilitation risk assessment integrated network (R2AIN™) framework that can be applied using Bayesian networks (BNs) to integrate and quantify such rehabilitation risks. Our framework has 7 steps, ranging from the integration of rehabilitation risk sources and the quantification of undesired rehabilitation risk events to the final application of mitigation. We demonstrate the framework using a soil compaction BN case study in the Witbank Coalfield, South Africa and the Bowen Basin, Australia. Our approach allows probabilistic assessments of rehabilitation risk from multiple disciplines to be integrated and quantified. Using this method, a site's rehabilitation risk profile can be determined before mining activities commence, and the effects of manipulating management actions during later mine phases to reduce risk can be gauged, to aid decision making.
@article{253, author = {Vanessa Weyer and Alta de Waal and Alex Lechner and Corinne Unger and Tim O'Connor and Thomas Baumgartl and Roland Schulze and Wayne Truter}, title = {Quantifying rehabilitation risks for surface‐strip coal mines using a soil compaction Bayesian network in South Africa and Australia: To demonstrate the R2AIN Framework}, abstract = {Environmental information is acquired and assessed during the environmental impact assessment process for surface‐strip coal mine approval. However, integrating these data and quantifying rehabilitation risk using a holistic multidisciplinary approach is seldom undertaken. We present a rehabilitation risk assessment integrated network (R2AIN™) framework that can be applied using Bayesian networks (BNs) to integrate and quantify such rehabilitation risks. Our framework has 7 steps, ranging from the integration of rehabilitation risk sources and the quantification of undesired rehabilitation risk events to the final application of mitigation. We demonstrate the framework using a soil compaction BN case study in the Witbank Coalfield, South Africa and the Bowen Basin, Australia. Our approach allows probabilistic assessments of rehabilitation risk from multiple disciplines to be integrated and quantified. Using this method, a site's rehabilitation risk profile can be determined before mining activities commence, and the effects of manipulating management actions during later mine phases to reduce risk can be gauged, to aid decision making.}, year = {2019}, journal = {Integrated Environmental Assessment and Management}, volume = {15}, pages = {190-208}, issue = {2}, publisher = {Wiley Online}, doi = {10.1002/ieam.4128}, }
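A minimal sketch of the kind of BN fragment such a framework quantifies, here using the pgmpy library; the variables, states and probabilities below are illustrative placeholders, not the elicited values from the soil compaction case studies:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two hypothetical risk sources feeding one undesired event.
bn = BayesianNetwork([('SoilMoisture', 'Compaction'),
                      ('TrafficLoad', 'Compaction')])
bn.add_cpds(
    TabularCPD('SoilMoisture', 2, [[0.7], [0.3]]),   # P(dry), P(wet)
    TabularCPD('TrafficLoad', 2, [[0.6], [0.4]]),    # P(light), P(heavy)
    TabularCPD('Compaction', 2,
               [[0.95, 0.70, 0.60, 0.20],            # P(low | parents)
                [0.05, 0.30, 0.40, 0.80]],           # P(high | parents)
               evidence=['SoilMoisture', 'TrafficLoad'],
               evidence_card=[2, 2]))

# Risk profile of the undesired event given observed heavy traffic.
print(VariableElimination(bn).query(['Compaction'],
                                    evidence={'TrafficLoad': 1}))
```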
This work compares techniques for clustering metered residential energy consumption data to construct representative daily load profiles in South Africa. The input data captures a population with high variability across temporal, geographic, social and economic dimensions. Different algorithms, normalisation and pre-binning techniques are evaluated to determine their effect on producing a good clustering structure. A Combined Index is developed as a relative score to ease the comparison of experiments across different metrics. The study shows that normalisation, specifically unit norm and the zero-one scaler, produces the best clusters. Pre-binning appears to improve clustering structures as a whole, but its effect on individual experiments remains unclear. As in several previous studies, the k-means algorithm produces the best results. To our knowledge this is the first work that rigorously compares state-of-the-art cluster analysis techniques in the residential energy domain in a developing country context.
@{249, author = {Wiebke Toussaint and Deshen Moodley}, title = {Comparison of clustering techniques for residential load profiles in South Africa}, abstract = {This work compares techniques for clustering metered residential energy consumption data to construct representative daily load profiles in South Africa. The input data captures a population with high variability across temporal, geographic, social and economic dimensions. Different algorithms, normalisation and pre-binning techniques are evaluated to determine their effect on producing a good clustering structure. A Combined Index is developed as a relative score to ease the comparison of experiments across different metrics. The study shows that normalisation, specifically unit norm and the zero-one scaler, produces the best clusters. Pre-binning appears to improve clustering structures as a whole, but its effect on individual experiments remains unclear. As in several previous studies, the k-means algorithm produces the best results. To our knowledge this is the first work that rigorously compares state-of-the-art cluster analysis techniques in the residential energy domain in a developing country context.}, year = {2019}, journal = {Forum for Artificial Intelligence Research}, pages = {117-132}, month = {03/12 - 06/12}, publisher = {CEUR}, isbn = {1613-0073}, url = {http://ceur-ws.org/Vol-2540/FAIR2019_paper_55.pdf}, }
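A hedged scikit-learn sketch of this kind of pipeline: unit-norm and zero-one normalisation ahead of k-means, scored with the Davies-Bouldin index as one stand-in metric (the random data, cluster count and metric choice are assumptions; the paper's Combined Index is not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler, Normalizer
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X = rng.random((1000, 24))          # stand-in for 24-hour daily load profiles

for name, scaler in [("unit norm", Normalizer()),
                     ("zero-one", MinMaxScaler())]:
    Xs = scaler.fit_transform(X)
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(Xs)
    print(name, davies_bouldin_score(Xs, labels))   # lower is better
```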
In recent work, we addressed an important limitation in previous extensions of description logics to represent defeasible knowledge, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects of the domain. Syntactically, this limitation translates to a context-agnostic notion of defeasible subsumption, which is quite restrictive when it comes to modelling different nuances of defeasibility. Our point of departure in our recent proposal allows for different orderings on the interpretation of roles. This yields a notion of contextual defeasible subsumption, where the context is informed by a role. In the present paper, we extend this work to also provide a proof-theoretic counterpart and associated results. We define a (naïve) tableau-based algorithm for checking preferential consistency of contextual defeasible knowledge bases, a central piece in the definition of other forms of contextual defeasible reasoning over ontologies, notably contextual rational closure.
@{247, author = {Katarina Britz and Ivan Varzinczak}, title = {Preferential tableaux for contextual defeasible ALC}, abstract = {In recent work, we addressed an important limitation in previous extensions of description logics to represent defeasible knowledge, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects of the domain. Syntactically, this limitation translates to a context-agnostic notion of defeasible subsumption, which is quite restrictive when it comes to modelling different nuances of defeasibility. Our point of departure in our recent proposal allows for different orderings on the interpretation of roles. This yields a notion of contextual defeasible subsumption, where the context is informed by a role. In the present paper, we extend this work to also provide a proof-theoretic counterpart and associated results. We define a (naïve) tableau-based algorithm for checking preferential consistency of contextual defeasible knowledge bases, a central piece in the definition of other forms of contextual defeasible reasoning over ontologies, notably contextual rational closure.}, year = {2019}, journal = {28th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX)}, pages = {39-57}, month = {03/09-05/09}, publisher = {Springer LNAI no. 11714}, isbn = {978-3-030-29026-9}, url = {https://www.springer.com/gp/book/9783030290252}, }
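For context, the non-contextual preferential semantics that this line of work generalises can be stated compactly. The LaTeX snippet below renders the standard KLM-style definition of defeasible subsumption over ranked interpretations; it is background, not the paper's contextual (role-indexed) variant:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
% \defsub: the "squiggly" defeasible subsumption connective (notation assumed)
\newcommand{\defsub}{\mathrel{\sqsubseteq_{\sim}}}
\begin{document}
A ranked interpretation $\mathcal{I}$ equips the domain with a preference
order $<^{\mathcal{I}}$ (lower means more typical). Defeasible subsumption
constrains only the most typical instances of the antecedent:
\[
  \mathcal{I} \Vdash C \defsub D
  \quad\text{iff}\quad
  \min_{<^{\mathcal{I}}}\!\bigl(C^{\mathcal{I}}\bigr) \subseteq D^{\mathcal{I}},
\]
whereas classical subsumption $C \sqsubseteq D$ demands
$C^{\mathcal{I}} \subseteq D^{\mathcal{I}}$ outright.
\end{document}
```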
Description logics have been extended in a number of ways to support defeasible reasoning in the KLM tradition. Such features include preferential or rational defeasible concept inclusion, and defeasible roles in complex concept descriptions. Semantically, defeasible subsumption is obtained by means of a preference order on objects, while defeasible roles are obtained by adding a preference order to role interpretations. In this paper, we address an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects. We do this by inducing a modular preference order on objects from each modular preference order on roles, and using these to relativise defeasible subsumption. This yields a notion of contextualised rational defeasible subsumption, with contexts described by roles. We also provide a semantic construction for rational closure and a method for its computation, and present a correspondence result between the two.
@article{246, author = {Katarina Britz and Ivan Varzinczak}, title = {Contextual rational closure for defeasible ALC}, abstract = {Description logics have been extended in a number of ways to support defeasible reasoning in the KLM tradition. Such features include preferential or rational defeasible concept inclusion, and defeasible roles in complex concept descriptions. Semantically, defeasible subsumption is obtained by means of a preference order on objects, while defeasible roles are obtained by adding a preference order to role interpretations. In this paper, we address an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects. We do this by inducing a modular preference order on objects from each modular preference order on roles, and using these to relativise defeasible subsumption. This yields a notion of contextualised rational defeasible subsumption, with contexts described by roles. We also provide a semantic construction for rational closure and a method for its computation, and present a correspondence result between the two.}, year = {2019}, journal = {Annals of Mathematics and Artificial Intelligence}, volume = {87}, pages = {83-108}, issue = {1-2}, isbn = {ISSN: 1012-2443}, url = {https://link.springer.com/article/10.1007/s10472-019-09658-2}, doi = {10.1007/s10472-019-09658-2}, }
A dynamic Bayesian decision network (DBDN) was developed to model the preharvest burning decision-making processes of sugarcane growers in a KwaZulu-Natal sugarcane supply chain, extending previous work by Price et al. (2018). This model was created using an iterative development approach. This paper recounts the development and validation process of the third version of the model. The model was validated using Pitchforth and Mengersen's (2013) framework for validating expert-elicited Bayesian networks. During this process, growers and cane supply members assessed the model in a focus group by executing the model and reviewing the results of a prerun scenario. The participants were generally positive about how the model represented their decision-making processes. However, they identified some issues that could be addressed in the next iteration. Dynamic Bayesian decision networks offer a promising approach to modelling adaptive decisions in uncertain conditions. This model can be used to simulate the cognitive mechanism of a grower agent in a simulation of a sugarcane supply chain.
@{244, author = {C. Sue Price and Deshen Moodley and Anban Pillay}, title = {Modelling uncertain adaptive decisions: Application to KwaZulu-Natal sugarcane growers}, abstract = {A dynamic Bayesian decision network (DBDN) was developed to model the preharvest burning decision-making processes of sugarcane growers in a KwaZulu-Natal sugarcane supply chain, extending previous work by Price et al. (2018). This model was created using an iterative development approach. This paper recounts the development and validation process of the third version of the model. The model was validated using Pitchforth and Mengersen's (2013) framework for validating expert-elicited Bayesian networks. During this process, growers and cane supply members assessed the model in a focus group by executing the model and reviewing the results of a prerun scenario. The participants were generally positive about how the model represented their decision-making processes. However, they identified some issues that could be addressed in the next iteration. Dynamic Bayesian decision networks offer a promising approach to modelling adaptive decisions in uncertain conditions. This model can be used to simulate the cognitive mechanism of a grower agent in a simulation of a sugarcane supply chain.}, year = {2019}, journal = {Forum for Artificial Intelligence Research (FAIR2019)}, pages = {145-160}, month = {4/12-6/12}, publisher = {CEUR}, address = {Cape Town}, url = {http://ceur-ws.org/Vol-2540/FAIR2019_paper_53.pdf}, }
The Cold-Start problem refers to the initial sparsity of data available to Recommender Systems, which leads to poor recommendations to users. This research compares a Deep Learning Approach, a Deep Learning Approach that makes use of social information, and Matrix Factorization. The social information was used to form communities of users; the intuition behind this approach is that users within a given community are likely to have similar interests. A community detection algorithm was used to group users, and thereafter a deep learning model was trained on each community. The comparative models were evaluated on the Yelp Round 9 Academic Dataset, pruned to consist only of users with at least one social link. The evaluation metrics used were Mean Squared Error (MSE) and Mean Absolute Error (MAE), and the evaluation was carried out using 5-fold cross-validation. The results showed that the use of social information improved on the results achieved by the Deep Learning Approach, and that grouping users into communities was advantageous. However, the Deep Learning Approach that made use of social information did not outperform SVD++, a state-of-the-art approach for recommender systems. Nevertheless, the new approach shows promise for improving Deep Learning models.
@{243, author = {Muhammad Ikram and Anban Pillay and Edgar Jembere}, title = {Using social networks to enhance a deep learning approach to solve the cold-start problem in recommender systems}, abstract = {The Cold-Start problem refers to the initial sparsity of data available to Recommender Systems, which leads to poor recommendations to users. This research compares a Deep Learning Approach, a Deep Learning Approach that makes use of social information, and Matrix Factorization. The social information was used to form communities of users; the intuition behind this approach is that users within a given community are likely to have similar interests. A community detection algorithm was used to group users, and thereafter a deep learning model was trained on each community. The comparative models were evaluated on the Yelp Round 9 Academic Dataset, pruned to consist only of users with at least one social link. The evaluation metrics used were Mean Squared Error (MSE) and Mean Absolute Error (MAE), and the evaluation was carried out using 5-fold cross-validation. The results showed that the use of social information improved on the results achieved by the Deep Learning Approach, and that grouping users into communities was advantageous. However, the Deep Learning Approach that made use of social information did not outperform SVD++, a state-of-the-art approach for recommender systems. Nevertheless, the new approach shows promise for improving Deep Learning models.}, year = {2019}, journal = {Forum for Artificial Intelligence Research (FAIR2019)}, pages = {173-184}, month = {4/12-6/12}, publisher = {CEUR}, address = {Cape Town}, url = {http://ceur-ws.org/Vol-2540/FAIR2019_paper_51.pdf}, }
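A small sketch of the community-forming step, using greedy modularity maximisation in networkx as one possible community detection algorithm (the paper does not commit to this particular algorithm, and the graph below is a hypothetical social network):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical social graph: nodes are user ids, edges are social links.
g = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)])

for i, users in enumerate(greedy_modularity_communities(g)):
    print(f"community {i}: {sorted(users)}")
    # ...then train one recommender per community on that community's
    # ratings, e.g. train_model(ratings[ratings.user_id.isin(users)])
```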
Training agents in hard-exploration, sparse-reward environments is a difficult task since the reward feedback is insufficient for meaningful learning. In this work, we propose a new technique, called Directed Curiosity, that is a hybrid of Curiosity-Driven Exploration and distance-based reward shaping. The technique is evaluated in a custom navigation task where an agent tries to learn the shortest path to a distant target, in environments of varying difficulty. The technique is compared to agents trained with only a shaped reward signal, a curiosity signal, as well as a sparse reward signal. It is shown that directed curiosity is the most successful in hard-exploration environments, with the benefits of the approach being highlighted in environments with numerous obstacles and decision points. The limitations of the shaped reward function are also discussed.
@{242, author = {Asad Jeewa and Anban Pillay and Edgar Jembere}, title = {Directed curiosity-driven exploration in hard exploration, sparse reward environments}, abstract = {Training agents in hard-exploration, sparse-reward environments is a difficult task since the reward feedback is insufficient for meaningful learning. In this work, we propose a new technique, called Directed Curiosity, that is a hybrid of Curiosity-Driven Exploration and distance-based reward shaping. The technique is evaluated in a custom navigation task where an agent tries to learn the shortest path to a distant target, in environments of varying difficulty. The technique is compared to agents trained with only a shaped reward signal, a curiosity signal, as well as a sparse reward signal. It is shown that directed curiosity is the most successful in hard-exploration environments, with the benefits of the approach being highlighted in environments with numerous obstacles and decision points. The limitations of the shaped reward function are also discussed.}, year = {2019}, journal = {Forum for Artificial Intelligence Research (FAIR)}, pages = {12-24}, month = {4/12-6/12}, publisher = {CEUR}, address = {Cape Town}, url = {http://ceur-ws.org/Vol-2540/FAIR2019_paper_42.pdf}, }
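A hedged sketch of how such a hybrid signal can be assembled. The paper's exact combination is not reproduced here; this uses standard potential-based distance shaping plus a weighted curiosity bonus, with the weight beta and potential choice being assumptions:

```python
def directed_curiosity_reward(extrinsic, curiosity_bonus,
                              dist_prev, dist_now, beta=0.01, gamma=0.99):
    """Combine a sparse extrinsic reward with a curiosity bonus and
    potential-based distance shaping F = gamma*phi(s') - phi(s),
    where phi(s) = -distance(s, goal)."""
    shaping = gamma * (-dist_now) - (-dist_prev)
    return extrinsic + beta * curiosity_bonus + shaping

# Moving one unit closer to the goal is rewarded even before any
# extrinsic reward has been seen:
print(directed_curiosity_reward(0.0, 0.5, dist_prev=10.0, dist_now=9.0))
```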
In this paper we present an approach to defeasible reasoning for the description logic ALC. The results discussed here are based on work done by Kraus, Lehmann and Magidor (KLM) on defeasible conditionals in the propositional case. We consider versions of a preferential semantics for two forms of defeasible subsumption, and link these semantic constructions formally to KLM-style syntactic properties via representation results. In addition to showing that the semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. With the semantics of the defeasible version of ALC in place, we turn to the investigation of an appropriate form of defeasible entailment for this enriched version of ALC. This investigation includes an algorithm for the computation of a form of defeasible entailment known as rational closure in the propositional case. Importantly, the algorithm relies completely on classical entailment checks and shows that the computational complexity of reasoning over defeasible ontologies is no worse than that of the underlying classical ALC. Before concluding, we take a brief tour of some existing work on defeasible extensions of ALC that go beyond defeasible subsumption.
@inbook{240, author = {Katarina Britz and Giovanni Casini and Tommie Meyer and Ivan Varzinczak}, title = {A KLM Perspective on Defeasible Reasoning for Description Logics}, abstract = {In this paper we present an approach to defeasible reasoning for the description logic ALC. The results discussed here are based on work done by Kraus, Lehmann and Magidor (KLM) on defeasible conditionals in the propositional case. We consider versions of a preferential semantics for two forms of defeasible subsumption, and link these semantic constructions formally to KLM-style syntactic properties via representation results. In addition to showing that the semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. With the semantics of the defeasible version of ALC in place, we turn to the investigation of an appropriate form of defeasible entailment for this enriched version of ALC. This investigation includes an algorithm for the computation of a form of defeasible entailment known as rational closure in the propositional case. Importantly, the algorithm relies completely on classical entailment checks and shows that the computational complexity of reasoning over defeasible ontologies is no worse than that of the underlying classical ALC. Before concluding, we take a brief tour of some existing work on defeasible extensions of ALC that go beyond defeasible subsumption.}, year = {2019}, journal = {Description Logic, Theory Combination, and All That}, pages = {147–173}, publisher = {Springer}, address = {Switzerland}, isbn = {978-3-030-22101-0}, url = {https://link.springer.com/book/10.1007%2F978-3-030-22102-7}, doi = {https://doi.org/10.1007/978-3-030-22102-7_7}, }
We present a systematic approach for extending the KLM framework for defeasible entailment. We first present a class of basic defeasible entailment relations, characterise it in three distinct ways and provide a high-level algorithm for computing it. This framework is then refined, with the refined version being characterised in a similar manner. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our refined framework, that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more “adventurous” than lexicographic closure.
@{238, author = {Giovanni Casini and Tommie Meyer and Ivan Varzinczak}, title = {Taking Defeasible Entailment Beyond Rational Closure}, abstract = {We present a systematic approach for extending the KLM framework for defeasible entailment. We first present a class of basic defeasible entailment relations, characterise it in three distinct ways and provide a high-level algorithm for computing it. This framework is then refined, with the refined version being characterised in a similar manner. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our refined framework, that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more “adventurous” than lexicographic closure.}, year = {2019}, journal = {European Conference on Logics in Artificial Intelligence}, pages = {182-197}, month = {07/05 - 11/05}, publisher = {Springer}, address = {Switzerland}, isbn = {978-3-030-19569-4}, url = {https://link.springer.com/chapter/10.1007%2F978-3-030-19570-0_12}, doi = {https://doi.org/10.1007/978-3-030-19570-0_12}, }
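To make the contrast between rational closure and its refinements concrete, the sketch below runs the classic "drowning" example: rational closure discards the whole lowest rank for exceptional individuals and so cannot conclude that penguins have wings, while keeping maximal consistent subsets of the discarded conditionals recovers that conclusion. This subset-selection step coincides with lexicographic closure when a single rank is discarded; the encoding is an illustrative assumption, not the paper's formal construction, and the small helpers repeat the previous sketch so the block runs on its own.

from itertools import combinations, product

ATOMS = ["bird", "penguin", "flies", "wings"]

def entails(premises, conclusion):
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def materialize(pairs):
    return [lambda v, a=a, c=c: (not a(v)) or c(v) for a, c in pairs]

kb = [
    (lambda v: v["bird"], lambda v: v["flies"]),         # birds fly
    (lambda v: v["bird"], lambda v: v["wings"]),         # birds have wings
    (lambda v: v["penguin"], lambda v: v["bird"]),       # penguins are birds
    (lambda v: v["penguin"], lambda v: not v["flies"]),  # penguins don't fly
]

def rank_conditionals(kb):
    ranks, remaining = [], list(kb)
    while remaining:
        mat = materialize(remaining)
        exceptional = [(a, c) for a, c in remaining
                       if entails(mat, lambda v, a=a: not a(v))]
        if len(exceptional) == len(remaining):
            ranks.append(remaining)
            break
        ranks.append([p for p in remaining if p not in exceptional])
        remaining = exceptional
    return ranks

def lex_entails(kb, antecedent, consequent):
    ranks = rank_conditionals(kb)
    for i in range(len(ranks)):
        kept = [p for r in ranks[i:] for p in r]
        if entails(materialize(kept), lambda v: not antecedent(v)):
            continue  # antecedent still inconsistent; drop another rank
        discarded = [p for r in ranks[:i] for p in r]
        # Refinement: instead of dropping all discarded conditionals,
        # keep every maximal-cardinality subset consistent with the
        # antecedent, and require the consequent under each of them.
        for k in range(len(discarded), -1, -1):
            subsets = [materialize(list(s) + kept)
                       for s in combinations(discarded, k)]
            consistent = [m for m in subsets
                          if not entails(m, lambda v: not antecedent(v))]
            if consistent:
                return all(entails(m + [antecedent], consequent)
                           for m in consistent)
    return True

# Rational closure cannot infer this; the refinement can:
print(lex_entails(kb, lambda v: v["penguin"], lambda v: v["wings"]))  # True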
Description logics (DLs) are well-known knowledge representation formalisms focused on the representation of terminological knowledge. A probabilistic extension of a light-weight DL was recently proposed for dealing with certain knowledge occurring in uncertain contexts. In this paper, we continue that line of research by introducing the Bayesian extension BALC of the DL ALC. We present a tableau-based procedure for deciding consistency, and adapt it to solve other probabilistic, contextual, and general inferences in this logic. We also show that all these problems remain ExpTime-complete, the same as reasoning in the underlying classical ALC.
@{237, author = {Leonard Botha and Tommie Meyer and Rafael Peñaloza}, title = {A Bayesian Extension of the Description Logic ALC}, abstract = {Description logics (DLs) are well-known knowledge representation formalisms focused on the representation of terminological knowledge. A probabilistic extension of a light-weight DL was recently proposed for dealing with certain knowledge occurring in uncertain contexts. In this paper, we continue that line of research by introducing the Bayesian extension BALC of the DL ALC. We present a tableau-based procedure for deciding consistency, and adapt it to solve other probabilistic, contextual, and general inferences in this logic. We also show that all these problems remain ExpTime-complete, the same as reasoning in the underlying classical ALC.}, year = {2019}, journal = {European Conference on Logics in Artificial Intelligence}, pages = {339-354}, month = {07/05 - 11/05}, publisher = {Springer}, address = {Switzerland}, isbn = {978-3-030-19569-4}, url = {https://link.springer.com/chapter/10.1007%2F978-3-030-19570-0_22}, doi = {https://doi.org/10.1007/978-3-030-19570-0_22}, }
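The contextual-probability idea behind Bayesian description logics such as BALC can be illustrated without the tableau machinery. In the toy sketch below, axioms are annotated with contexts over the variables of a two-node Bayesian network, and the probability of a consequence is obtained by summing the probabilities of the worlds in which the axioms deriving it apply. The network, axioms, and helper names are invented for illustration; the paper's actual decision procedure is tableau-based.

from itertools import product

# A two-variable network: P(x) and P(y | x).
p_x = {True: 0.7, False: 0.3}
p_y_given_x = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

def world_prob(world):
    return p_x[world["x"]] * p_y_given_x[world["x"]][world["y"]]

# Each axiom holds only in worlds satisfying its context
# (a dict of required variable values).
axioms = {
    "A subsumed-by B": {"x": True},
    "B subsumed-by C": {"y": True},
}

def prob_all_axioms_apply():
    """Sum the probabilities of worlds where every axiom's context holds."""
    total = 0.0
    for x, y in product([False, True], repeat=2):
        world = {"x": x, "y": y}
        if all(all(world[var] == val for var, val in ctx.items())
               for ctx in axioms.values()):
            total += world_prob(world)
    return total

# In worlds where both axioms apply, A is classically subsumed by C, so
# this gives (a lower bound on) the probability of "A subsumed-by C":
print(prob_all_axioms_apply())  # 0.7 * 0.9 = 0.63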
The understanding of generalization in machine learning is in a state of flux. This is partly due to the relatively recent revelation that deep learning models are able to completely memorize training data and still perform appropriately on out-of-sample data, thereby contradicting long-held intuitions about generalization. The phenomenon was brought to light and discussed in a seminal paper by Zhang et al. [24]. We expand upon this work by discussing local attributes of neural network training within the context of a relatively simple and generalizable framework. We describe how various types of noise can be compensated for within the proposed framework in order to allow the global deep learning model to generalize in spite of interpolating spurious function descriptors. Empirically, we support our postulates with experiments involving overparameterized multilayer perceptrons and controlled noise in the training data. The main insights are that deep learning models are optimized for training data modularly, with different regions in the function space dedicated to fitting distinct kinds of sample information. Detrimental overfitting is largely prevented by the fact that different regions in the function space are used for prediction based on the similarity between new input data and that which has been optimized for.
@{284, author = {Marthinus Theunissen and Marelie Davel and Etienne Barnard}, title = {Insights regarding overfitting on noise in deep learning}, abstract = {The understanding of generalization in machine learning is in a state of flux. This is partly due to the relatively recent revelation that deep learning models are able to completely memorize training data and still perform appropriately on out-of-sample data, thereby contradicting long-held intuitions about generalization. The phenomenon was brought to light and discussed in a seminal paper by Zhang et al. [24]. We expand upon this work by discussing local attributes of neural network training within the context of a relatively simple and generalizable framework. We describe how various types of noise can be compensated for within the proposed framework in order to allow the global deep learning model to generalize in spite of interpolating spurious function descriptors. Empirically, we support our postulates with experiments involving overparameterized multilayer perceptrons and controlled noise in the training data. The main insights are that deep learning models are optimized for training data modularly, with different regions in the function space dedicated to fitting distinct kinds of sample information. Detrimental overfitting is largely prevented by the fact that different regions in the function space are used for prediction based on the similarity between new input data and that which has been optimized for.}, year = {2019}, journal = {South African Forum for Artificial Intelligence Research (FAIR)}, pages = {49-63}, address = {Cape Town, South Africa}, }
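A controlled-noise experiment of the kind described above is easy to reproduce in miniature. The sketch below (using scikit-learn, with an invented dataset, architecture, and noise level rather than the paper's setup) trains an overparameterized MLP on partially corrupted labels, then compares fit on the noisy targets with generalization on clean test data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Flip 20% of the training labels at random.
flip = rng.random(len(y_tr)) < 0.2
y_tr_noisy = np.where(flip, 1 - y_tr, y_tr)

# Two wide hidden layers: heavily overparameterized for 700 samples.
model = MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=2000,
                      random_state=0)
model.fit(X_tr, y_tr_noisy)

# The network can fit (memorize) the corrupted labels while still
# performing reasonably on clean out-of-sample data.
print("train accuracy (noisy labels):", model.score(X_tr, y_tr_noisy))
print("test accuracy (clean labels): ", model.score(X_te, y_te))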
The generalization capabilities of deep neural networks are not well understood, and in particular, the influence of activation functions on generalization has received little theoretical attention. Phenomena such as vanishing gradients, node saturation and network sparsity have been identified as possible factors when comparing different activation functions [1]. We investigate these factors using fully connected feedforward networks on two standard benchmark problems, and find that the most salient differences between networks with sigmoidal and ReLU activations relate to the way that class-distinctive information is propagated through a network.
@{279, author = {Arnold Pretorius and Etienne Barnard and Marelie Davel}, title = {ReLU and sigmoidal activation functions}, abstract = {The generalization capabilities of deep neural networks are not well understood, and in particular, the influence of activation functions on generalization has received little theoretical attention. Phenomena such as vanishing gradients, node saturation and network sparsity have been identified as possible factors when comparing different activation functions [1]. We investigate these factors using fully connected feedforward networks on two standard benchmark problems, and find that the most salient differences between networks with sigmoidal and ReLU activations relate to the way that class-distinctive information is propagated through a network.}, year = {2019}, journal = {South African Forum for Artificial Intelligence Research (FAIR)}, pages = {37-48}, month = {04/12-07/12}, publisher = {CEUR Workshop Proceedings}, address = {Cape Town, South Africa}, }
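The node-saturation and sparsity measurements mentioned in this abstract can be sketched with a simple forward pass. The snippet below pushes random inputs through an untrained fully connected network and compares, per layer, the fraction of ReLU nodes that are inactive (output exactly zero) with the fraction of sigmoid nodes that are saturated; the layer sizes, thresholds, and initialization scheme are illustrative assumptions, not the paper's experimental configuration.

import numpy as np

rng = np.random.default_rng(0)
LAYER_SIZES = [64, 128, 128, 128]
X = rng.standard_normal((256, LAYER_SIZES[0]))

def activation_stats(X, activation):
    h, stats = X, []
    for n_in, n_out in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:]):
        W = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
        z = h @ W
        if activation == "relu":
            h = np.maximum(z, 0.0)
            stats.append(float(np.mean(h == 0.0)))                  # inactive
        else:
            h = 1.0 / (1.0 + np.exp(-z))
            stats.append(float(np.mean((h < 0.05) | (h > 0.95))))   # saturated
    return stats

print("ReLU inactive fraction per layer:    ", activation_stats(X, "relu"))
print("sigmoid saturated fraction per layer:", activation_stats(X, "sigmoid"))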