Research Publications
2020
Enterprise Architecture (EA) has had an interesting and often controversial history since its inception in the late 80’s by pioneers such as John Zachman. Zachman proposed the Zachman Framework for Enterprise Architecture (ZFEA), a descriptive, holistic representation of an enterprise for the purposes of providing insights and understanding. Some scholars claim that EA is an imperative to ensure successful business structures or business-IT alignment, or more recently with Enterprise Architecture Management (EAM), to manage required organizational transformation. However, EA initiatives within companies are often costly and the expected return on investment is not realized. In fact, Gartner recently indicated in their 2018 Enterprise Architecture Hype Cycle that EA is slowly emerging from the trough of disillusionment after nearly a decade. In this paper we argue that the role and value of EA is often misunderstood, and that EA, specifically the ZFEA for the purpose of this paper, could be considered as a theory given the view of theory within Information Systems (IS). The purpose of IS theories is to analyse, predict, explain and/or prescribe and it could be argued that EA often conform to these purposes. Using the taxonomy of theories as well as the structural components of theory within IS as proposed by Gregor, we motivate that the ZFEA could be regarded as an explanatory theory. Positioning ZFEA as IS explanatory theory provides insight into the role and purpose of the ZFEA (and by extension EA), and could assist researchers and practitioners with mediating the challenges experienced when instituting EA and EAM initiatives within organizations.
@article{441, author = {Aurona Gerber and Pierre le Roux and Carike Kearney and Alta van der Merwe}, title = {The Zachman Framework for Enterprise Architecture: An Explanatory IS Theory}, abstract = {Enterprise Architecture (EA) has had an interesting and often controversial history since its inception in the late 80’s by pioneers such as John Zachman. Zachman proposed the Zachman Framework for Enterprise Architecture (ZFEA), a descriptive, holistic representation of an enterprise for the purposes of providing insights and understanding. Some scholars claim that EA is an imperative to ensure successful business structures or business-IT alignment, or more recently with Enterprise Architecture Management (EAM), to manage required organizational transformation. However, EA initiatives within companies are often costly and the expected return on investment is not realized. In fact, Gartner recently indicated in their 2018 Enterprise Architecture Hype Cycle that EA is slowly emerging from the trough of disillusionment after nearly a decade. In this paper we argue that the role and value of EA is often misunderstood, and that EA, specifically the ZFEA for the purpose of this paper, could be considered as a theory given the view of theory within Information Systems (IS). The purpose of IS theories is to analyse, predict, explain and/or prescribe and it could be argued that EA often conform to these purposes. Using the taxonomy of theories as well as the structural components of theory within IS as proposed by Gregor, we motivate that the ZFEA could be regarded as an explanatory theory. Positioning ZFEA as IS explanatory theory provides insight into the role and purpose of the ZFEA (and by extension EA), and could assist researchers and practitioners with mediating the challenges experienced when instituting EA and EAM initiatives within organizations.}, year = {2020}, journal = {Lecture Notes in Computer Science}, volume = {12066}, pages = {383-396}, publisher = {Springer}, address = {Cham}, isbn = {978-3-030-44999-5}, url = {https://link.springer.com/chapter/10.1007/978-3-030-44999-5_32}, doi = {10.1007/978-3-030-44999-5_32}, }
Understanding and explaining small- and medium-sized enterprise (SME) growth is important for sustainability from multiple perspectives. Research indicates that SMEs comprise more than 80% of most economies, and their cumulative impact on sustainability considerations is far from trivial. In addition, for sustainability concerns to be prioritized, an SME has to be successful over time. In most developing countries, SMEs play a major role in solving socio-economic challenges. SMEs are an active research topic within the information systems (IS) discipline, often within the enterprise architecture (EA) domain. EA fundamentally adopts a systems perspective to describe the essential elements of a socio-technical organization and their relationships to each other and to the environment in order to understand complexity and manage change. However, despite rapid adoption originally, EA research and practice often fails to deliver on expectations. In some circles, EA became synonymous with projects that are over-budget, over-time and costly without the expected return on investment. In this paper, we argue that EA remains indispensable for understanding and explaining enterprises and that we fundamentally need to revisit some of the applications of EA. We, therefore, executed a research study in two parts. In the first part, we applied IS theory perspectives and adopted the taxonomy and structural components of theory to argue that EA, as represented by the Zachman Framework for Enterprise Architecture (ZFEA), could be adopted as an explanatory IS theory. In the second part of the study, we subsequently analysed multiple case studies from this theoretical basis to investigate whether distinguishable focus patterns could be detected during SME growth. The final results provide evidence that EA, represented through an appropriate framework like the ZFEA, could serve as an explanatory theory for SMEs during start-up, growth and transformation. We identified focus patterns and from these results, it should be possible to understand and explain how SMEs grow. Positioning the ZFEA as explanatory IS theory provides insight into the role and purpose of the ZFEA (and by extension EA), and could assist researchers and practitioners with mediating the challenges experienced by SMEs, and, by extension, enhance sustainable development.
@article{440, author = {Aurona Gerber and Pierre le Roux and Alta van der Merwe}, title = {Enterprise Architecture as Explanatory Information Systems Theory for Understanding Small- and Medium-Sized Enterprise Growth}, abstract = {Understanding and explaining small- and medium-sized enterprise (SME) growth is important for sustainability from multiple perspectives. Research indicates that SMEs comprise more than 80% of most economies, and their cumulative impact on sustainability considerations is far from trivial. In addition, for sustainability concerns to be prioritized, an SME has to be successful over time. In most developing countries, SMEs play a major role in solving socio-economic challenges. SMEs are an active research topic within the information systems (IS) discipline, often within the enterprise architecture (EA) domain. EA fundamentally adopts a systems perspective to describe the essential elements of a socio-technical organization and their relationships to each other and to the environment in order to understand complexity and manage change. However, despite rapid adoption originally, EA research and practice often fails to deliver on expectations. In some circles, EA became synonymous with projects that are over-budget, over-time and costly without the expected return on investment. In this paper, we argue that EA remains indispensable for understanding and explaining enterprises and that we fundamentally need to revisit some of the applications of EA. We, therefore, executed a research study in two parts. In the first part, we applied IS theory perspectives and adopted the taxonomy and structural components of theory to argue that EA, as represented by the Zachman Framework for Enterprise Architecture (ZFEA), could be adopted as an explanatory IS theory. In the second part of the study, we subsequently analysed multiple case studies from this theoretical basis to investigate whether distinguishable focus patterns could be detected during SME growth. The final results provide evidence that EA, represented through an appropriate framework like the ZFEA, could serve as an explanatory theory for SMEs during start-up, growth and transformation. We identified focus patterns and from these results, it should be possible to understand and explain how SMEs grow. Positioning the ZFEA as explanatory IS theory provides insight into the role and purpose of the ZFEA (and by extension EA), and could assist researchers and practitioners with mediating the challenges experienced by SMEs, and, by extension, enhance sustainable development.}, year = {2020}, journal = {Sustainability}, volume = {12}, issue = {20}, isbn = {2071-1050}, url = {https://www.mdpi.com/2071-1050/12/20/8517}, doi = {10.3390/su12208517}, }
The past 25 years have seen many attempts to introduce defeasible-reasoning capabilities into a description logic setting. Many, if not most, of these attempts are based on preferential extensions of description logics, with a significant number of these, in turn, following the so-called KLM approach to defeasible reasoning initially advocated for propositional logic by Kraus, Lehmann, and Magidor. Each of these attempts has its own aim of investigating particular constructions and variants of the (KLM-style) preferential approach. Here our aim is to provide a comprehensive study of the formal foundations of preferential defeasible reasoning for description logics in the KLM tradition. We start by investigating a notion of defeasible subsumption in the spirit of defeasible conditionals as studied by Kraus, Lehmann, and Magidor in the propositional case. In particular, we consider a natural and intuitive semantics for defeasible subsumption, and we investigate KLM-style syntactic properties for both preferential and rational subsumption. Our contribution includes two representation results linking our semantic constructions to the set of preferential and rational properties considered. Besides showing that our semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. Indeed, we also analyse the problem of non-monotonic reasoning in description logics at the level of entailment and present an algorithm for the computation of rational closure of a defeasible knowledge base. Importantly, our algorithm relies completely on classical entailment and shows that the computational complexity of reasoning over defeasible knowledge bases is no worse than that of reasoning in the underlying classical DL ALC.
@article{433, author = {Katarina Britz and Giovanni Casini and Tommie Meyer and Kody Moodley and Uli Sattler and Ivan Varzinczak}, title = {Principles of KLM-style Defeasible Description Logics}, abstract = {The past 25 years have seen many attempts to introduce defeasible-reasoning capabilities into a description logic setting. Many, if not most, of these attempts are based on preferential extensions of description logics, with a significant number of these, in turn, following the so-called KLM approach to defeasible reasoning initially advocated for propositional logic by Kraus, Lehmann, and Magidor. Each of these attempts has its own aim of investigating particular constructions and variants of the (KLM-style) preferential approach. Here our aim is to provide a comprehensive study of the formal foundations of preferential defeasible reasoning for description logics in the KLM tradition. We start by investigating a notion of defeasible subsumption in the spirit of defeasible conditionals as studied by Kraus, Lehmann, and Magidor in the propositional case. In particular, we consider a natural and intuitive semantics for defeasible subsumption, and we investigate KLM-style syntactic properties for both preferential and rational subsumption. Our contribution includes two representation results linking our semantic constructions to the set of preferential and rational properties considered. Besides showing that our semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. Indeed, we also analyse the problem of non-monotonic reasoning in description logics at the level of entailment and present an algorithm for the computation of rational closure of a defeasible knowledge base. Importantly, our algorithm relies completely on classical entailment and shows that the computational complexity of reasoning over defeasible knowledge bases is no worse than that of reasoning in the underlying classical DL ALC.}, year = {2020}, journal = {ACM Transactions on Computational Logic}, volume = {22}, issue = {1}, pages = {1-46}, publisher = {ACM}, url = {https://dl.acm.org/doi/abs/10.1145/3420258}, doi = {10.1145/3420258}, }
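The abstract above notes that the rational closure algorithm relies entirely on classical entailment. The sketch below illustrates that idea in propositional logic (not the description logic ALC treated in the paper) and assumes no statements of infinite rank: defeasible statements are materialised as classical implications, ranked by exceptionality using only classical entailment checks, and queries are answered by discarding the most typical ranks until the antecedent becomes consistent. All names and the encoding are illustrative.

# Propositional sketch of the rational closure construction (illustrative only;
# the paper develops the algorithm for the description logic ALC).
from itertools import product

def worlds(atoms):
    return [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

def entails(kb, formula, atoms):
    # classical propositional entailment by exhaustive valuation checking
    return all(formula(w) for w in worlds(atoms) if all(f(w) for f in kb))

def rank_defeasibles(defeasibles, atoms):
    # defeasibles: (antecedent, consequent) callables, read "antecedent typically implies consequent"
    materialised = [(a, c, (lambda w, a=a, c=c: (not a(w)) or c(w))) for a, c in defeasibles]
    ranks, remaining = [], materialised
    while remaining:
        kb = [m for (_, _, m) in remaining]
        exceptional = [t for t in remaining if entails(kb, (lambda w, a=t[0]: not a(w)), atoms)]
        if len(exceptional) == len(remaining):
            ranks.append(remaining)  # these would be infinitely ranked; not handled in this sketch
            break
        ranks.append([t for t in remaining if t not in exceptional])
        remaining = exceptional
    return ranks

def rationally_entails(ranks, antecedent, consequent, atoms):
    # drop the lowest (most typical) ranks until the antecedent is consistent, then check classically
    for i in range(len(ranks) + 1):
        kb = [m for rank in ranks[i:] for (_, _, m) in rank]
        if not entails(kb, lambda w: not antecedent(w), atoms):
            return entails(kb, lambda w: (not antecedent(w)) or consequent(w), atoms)
    return True  # antecedent unsatisfiable even with an empty knowledge base: holds vacuously

# Example: birds typically fly, penguins are typically birds, penguins typically do not fly.
atoms = ["bird", "penguin", "flies"]
defeasibles = [
    (lambda w: w["bird"], lambda w: w["flies"]),
    (lambda w: w["penguin"], lambda w: w["bird"]),       # kept defeasible here for simplicity
    (lambda w: w["penguin"], lambda w: not w["flies"]),
]
ranks = rank_defeasibles(defeasibles, atoms)
print(rationally_entails(ranks, lambda w: w["penguin"], lambda w: not w["flies"], atoms))  # True
print(rationally_entails(ranks, lambda w: w["penguin"], lambda w: w["flies"], atoms))      # False
print(rationally_entails(ranks, lambda w: w["bird"], lambda w: w["flies"], atoms))         # True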
Description logics (DLs) are well-known knowledge representation formalisms focused on the representation of terminological knowledge. Due to their first-order semantics, these languages (in their classical form) are not suitable for representing and handling uncertainty. A probabilistic extension of a light-weight DL was recently proposed for dealing with certain knowledge occurring in uncertain contexts. In this paper, we continue that line of research by introducing the Bayesian extension BALC of the propositionally closed DL ALC. We present a tableau-based procedure for deciding consistency and adapt it to solve other probabilistic, contextual, and general inferences in this logic. We also show that all these problems remain ExpTime-complete, the same as reasoning in the underlying classical ALC.
@article{432, author = {Leonard Botha and Tommie Meyer and Rafael Penaloza}, title = {The Probabilistic Description Logic BALC}, abstract = {Description logics (DLs) are well-known knowledge representation formalisms focused on the representation of terminological knowledge. Due to their first-order semantics, these languages (in their classical form) are not suitable for representing and handling uncertainty. A probabilistic extension of a light-weight DL was recently proposed for dealing with certain knowledge occurring in uncertain contexts. In this paper, we continue that line of research by introducing the Bayesian extension BALC of the propositionally closed DL ALC. We present a tableau-based procedure for deciding consistency and adapt it to solve other probabilistic, contextual, and general inferences in this logic. We also show that all these problems remain ExpTime-complete, the same as reasoning in the underlying classical ALC.}, year = {2020}, journal = {Theory and Practice of Logic Programming}, pages = {1-24}, publisher = {Cambridge University Press}, doi = {10.1017/S1471068420000460}, }
Not applicable.
@proceedings{431, editor = {Stefan Borgwardt and Tommie Meyer}, title = {Proceedings of the 33rd International Workshop on Description Logics (DL 2020)}, abstract = {Not applicable.}, year = {2020}, journal = {33rd International Workshop on Description Logics (DL 2020)}, month = {12/09/2020-14/09/2020}, address = {Online}, url = {http://ceur-ws.org/Vol-2663/}, }
The field of defeasible reasoning has a variety of frameworks, all of which are constructed with the view of codifying the patterns of common-sense reasoning inherent to human reasoning. One of these frameworks was first described by Kraus, Lehmann and Magidor, and is accordingly referred to as the KLM framework. Initially defined in propositional logic, it has since been imported into description and modal logics, and implemented into many defeasible reasoning engines. However, there are many ways in which this framework may be advanced theoretically, and many opportunities for it to be applied. This paper covers some of the most prominent areas of future work and possible applications of this framework, with the intention that anyone who has recently familiarized themselves with this approach may then have an understanding of the kind of work in which they could engage.
@inproceedings{414, author = {Adam Kaliski and Tommie Meyer}, title = {Quo Vadis KLM-style Defeasible Reasoning?}, abstract = {The field of defeasible reasoning has a variety of frameworks, all of which are constructed with the view of codifying the patterns of common-sense reasoning inherent to human reasoning. One of these frameworks was first described by Kraus, Lehmann and Magidor, and is accordingly referred to as the KLM framework. Initially defined in propositional logic, it has since been imported into description and modal logics, and implemented into many defeasible reasoning engines. However, there are many ways in which this framework may be advanced theoretically, and many opportunities for it to be applied. This paper covers some of the most prominent areas of future work and possible applications of this framework, with the intention that anyone who has recently familiarized themselves with this approach may then have an understanding of the kind of work in which they could engage.}, year = {2020}, booktitle = {First Southern African Conference for Artificial Intelligence Research}, pages = {231-246}, month = {22/02/2021}, publisher = {SACAIR2020}, address = {Virtual}, isbn = {978-0-620-89373-2}, url = {https://2020.sacair.org.za/wp-content/uploads/2021/02/SACAIR_Proceedings-MainBook_Finv4_compressed.pdf}, }
Propositional KLM-style defeasible reasoning involves extending propositional logic with a new logical connective that can express defeasible (or conditional) implications, with semantics given by ordered structures known as ranked interpretations. KLM-style defeasible entailment is referred to as rational whenever the defeasible entailment relation under consideration generates a set of defeasible implications all satisfying a set of rationality postulates known as the KLM postulates. In a recent paper Booth et al. proposed PTL, a logic that is more expressive than the core KLM logic. They proved an impossibility result, showing that defeasible entailment for PTL fails to satisfy a set of rationality postulates similar in spirit to the KLM postulates. Their interpretation of the impossibility result is that defeasible entailment for PTL need not be unique. In this paper we continue the line of research in which the expressivity of the core KLM logic is extended. We present the logic Boolean KLM (BKLM) in which we allow for disjunctions, conjunctions, and negations, but not nesting, of defeasible implications. Our contribution is twofold. Firstly, we show (perhaps surprisingly) that BKLM is more expressive than PTL. Our proof is based on the fact that BKLM can characterise all single ranked interpretations, whereas PTL cannot. Secondly, given that the PTL impossibility result also applies to BKLM, we adapt the different forms of PTL entailment proposed by Booth et al. to apply to BKLM.
@inproceedings{413, author = {Guy Paterson-Jones and Tommie Meyer}, title = {A Boolean Extension of KLM-style Conditional Reasoning}, abstract = {Propositional KLM-style defeasible reasoning involves extending propositional logic with a new logical connective that can express defeasible (or conditional) implications, with semantics given by ordered structures known as ranked interpretations. KLM-style defeasible entailment is referred to as rational whenever the defeasible entailment relation under consideration generates a set of defeasible implications all satisfying a set of rationality postulates known as the KLM postulates. In a recent paper Booth et al. proposed PTL, a logic that is more expressive than the core KLM logic. They proved an impossibility result, showing that defeasible entailment for PTL fails to satisfy a set of rationality postulates similar in spirit to the KLM postulates. Their interpretation of the impossibility result is that defeasible entailment for PTL need not be unique. In this paper we continue the line of research in which the expressivity of the core KLM logic is extended. We present the logic Boolean KLM (BKLM) in which we allow for disjunctions, conjunctions, and negations, but not nesting, of defeasible implications. Our contribution is twofold. Firstly, we show (perhaps surprisingly) that BKLM is more expressive than PTL. Our proof is based on the fact that BKLM can characterise all single ranked interpretations, whereas PTL cannot. Secondly, given that the PTL impossibility result also applies to BKLM, we adapt the different forms of PTL entailment proposed by Booth et al. to apply to BKLM.}, year = {2020}, booktitle = {First Southern African Conference for AI Research (SACAIR 2020)}, pages = {236-252}, month = {22/02/2021-26/02/2021}, publisher = {Springer}, address = {Muldersdrift, South Africa}, isbn = {978-3-030-66151-9}, url = {https://link.springer.com/book/10.1007/978-3-030-66151-9}, doi = {10.1007/978-3-030-66151-9_15}, }
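As a minimal sketch of the ranked-interpretation semantics referred to above, and of the kind of Boolean combination of defeasible implications that BKLM permits, the following encodes a single ranked interpretation as a map from worlds to ranks and evaluates both individual defeasible implications and a Boolean combination of them. The world encoding and the example ranking are illustrative assumptions, not the paper's constructions.

# Ranked interpretation: lower rank = more typical world. Worlds are encoded as
# frozensets of the atoms that are true in them (illustrative encoding).
ranked = {
    frozenset({"bird", "flies"}): 0,             # typical birds fly
    frozenset({"bird"}): 1,                      # non-flying birds are less typical
    frozenset({"bird", "penguin"}): 1,           # penguins that do not fly
    frozenset({"bird", "penguin", "flies"}): 2,  # flying penguins are least typical
    frozenset(): 0,
}

def satisfies_defeasible(ranked, antecedent, consequent):
    # A |~ B holds iff every minimal-rank world satisfying A also satisfies B.
    a_worlds = [(w, r) for w, r in ranked.items() if antecedent(w)]
    if not a_worlds:
        return True
    min_rank = min(r for _, r in a_worlds)
    return all(consequent(w) for w, r in a_worlds if r == min_rank)

bird = lambda w: "bird" in w
penguin = lambda w: "penguin" in w
flies = lambda w: "flies" in w
not_flies = lambda w: "flies" not in w

print(satisfies_defeasible(ranked, bird, flies))         # True:  bird |~ flies
print(satisfies_defeasible(ranked, penguin, not_flies))  # True:  penguin |~ not flies

# A BKLM-style Boolean combination of defeasible implications, evaluated at the
# level of the ranked interpretation as a whole:
print(satisfies_defeasible(ranked, bird, flies) and not satisfies_defeasible(ranked, penguin, flies))
# True: (bird |~ flies) AND NOT (penguin |~ flies)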
Classical logic forms the basis of knowledge representation and reasoning in AI. In the real world, however, classical logic alone is insufficient to describe the reasoning behaviour of human beings. It lacks the flexibility so characteristically required of reasoning under uncertainty, reasoning under incomplete information and reasoning with new information, as humans must. In response, non-classical extensions to propositional logic have been formulated, to provide non-monotonicity. It has been shown in previous studies that human reasoning exhibits non-monotonicity. This work is the product of merging three independent studies, each one focusing on a different formalism for non-monotonic reasoning: KLM defeasible reasoning, AGM belief revision and KM belief update. We investigate, for each of the postulates propounded to characterise these logic forms, the extent to which they have correspondence with human reasoners. We do this via three respective experiments and present each of the postulates in concrete and abstract form. We discuss related work, our experiment design, testing and evaluation, and report on the results from our experiments. We find evidence to believe that 1 out of 5 KLM defeasible reasoning postulates, 3 out of 8 AGM belief revision postulates and 4 out of 8 KM belief update postulates conform in both the concrete and abstract case. For each experiment, we performed an additional investigation. In the experiments of KLM defeasible reasoning and AGM belief revision, we analyse the explanations given by participants to determine whether the postulates have a normative or descriptive relationship with human reasoning. We find evidence that suggests, overall, KLM defeasible reasoning has a normative relationship with human reasoning while AGM belief revision has a descriptive relationship with human reasoning. In the experiment of KM belief update, we discuss counter-examples to the KM postulates.
@{412, author = {Clayton Baker and Claire Denny and Paul Freund and Tommie Meyer}, title = {Cognitive Defeasible Reasoning: the Extent to which Forms of Defeasible Reasoning Correspond with Human Reasoning}, abstract = {Classical logic forms the basis of knowledge representation and reasoning in AI. In the real world, however, classical logic alone is insufficient to describe the reasoning behaviour of human beings. It lacks the flexibility so characteristically required of reasoning under uncertainty, reasoning under incomplete information and reasoning with new information, as humans must. In response, non-classical extensions to propositional logic have been formulated, to provide non-monotonicity. It has been shown in previous studies that human reasoning exhibits non-monotonicity. This work is the product of merging three independent studies, each one focusing on a different formalism for non-monotonic reasoning: KLM defeasible reasoning, AGM belief revision and KM belief update. We investigate, for each of the postulates propounded to characterise these logic forms, the extent to which they have correspondence with human reasoners. We do this via three respective experiments and present each of the postulates in concrete and abstract form. We discuss related work, our experiment design, testing and evaluation, and report on the results from our experiments. We find evidence to believe that 1 out of 5 KLM defeasible reasoning postulates, 3 out of 8 AGM belief revision postulates and 4 out of 8 KM belief update postulates conform in both the concrete and abstract case. For each experiment, we performed an additional investigation. In the experiments of KLM defeasible reasoning and AGM belief revision, we analyse the explanations given by participants to determine whether the postulates have a normative or descriptive relationship with human reasoning. We find evidence that suggests, overall, KLM defeasible reasoning has a normative relationship with human reasoning while AGM belief revision has a descriptive relationship with human reasoning. In the experiment of KM belief update, we discuss counter-examples to the KM postulates.}, year = {2020}, journal = {First Southern African Conference for AI Research (SACAIR 2020)}, pages = {199-219}, month = {22/02/2021-26/02/2021}, publisher = {Springer}, address = {Muldersdrift, South Africa}, isbn = {978-3-030-66151-9}, url = {https://link.springer.com/book/10.1007/978-3-030-66151-9}, doi = {10.1007/978-3-030-66151-9_13}, }
Datalog is a powerful language that can be used to represent explicit knowledge and compute inferences in knowledge bases. Datalog cannot, however, represent or reason about contradictory rules. This is a limitation as contradictions are often present in domains that contain exceptions. In this paper, we extend Datalog to represent contradictory and defeasible information. We define an approach to efficiently reason about contradictory information in Datalog and show that it satisfies the KLM requirements for a rational consequence relation. We introduce DDLV, a defeasible Datalog reasoning system that implements this approach. Finally, we evaluate the performance of DDLV.
@article{411, author = {Michael Harrison and Tommie Meyer}, title = {DDLV: A System for rational preferential reasoning for datalog}, abstract = {Datalog is a powerful language that can be used to represent explicit knowledge and compute inferences in knowledge bases. Datalog cannot, however, represent or reason about contradictory rules. This is a limitation as contradictions are often present in domains that contain exceptions. In this paper, we extend Datalog to represent contradictory and defeasible information. We define an approach to efficiently reason about contradictory information in Datalog and show that it satisfies the KLM requirements for a rational consequence relation. We introduce DDLV, a defeasible Datalog reasoning system that implements this approach. Finally, we evaluate the performance of DDLV.}, year = {2020}, journal = {South African Computer Journal}, volume = {32}, pages = {184-217}, issue = {2}, publisher = {SACJ}, address = {Online}, isbn = {ISSN 2313-7835}, doi = {10.18489/sacj.v32i2.850}, }
Deontic logic is a logic often used to formalise scenarios in the legal domain. Within the legal domain there are many exceptions and conflicting obligations. This motivates the enrichment of deontic logic with not only the notion of defeasibility, which allows for reasoning about exceptions, but a stronger notion of typicality that is based on defeasibility. KLM-style defeasible reasoning is a logic system that employs defeasibility while Propositional Typicality Logic (PTL) is a logic that does the same for the notion of typicality. Deontic paradoxes are often used to examine logic systems as the paradoxes provide undesirable results even if the scenarios seem intuitive. Forrester’s paradox is one of the most famous of these paradoxes. This paper shows that KLM-style defeasible reasoning and PTL can be used to represent and reason with Forrester’s paradox in such a way as to block undesirable conclusions without completely sacrificing desirable deontic properties.
@article{410, author = {Julian Chingoma and Tommie Meyer}, title = {Defeasibility applied to Forrester’s paradox}, abstract = {Deontic logic is a logic often used to formalise scenarios in the legal domain. Within the legal domain there are many exceptions and conflicting obligations. This motivates the enrichment of deontic logic with not only the notion of defeasibility, which allows for reasoning about exceptions, but a stronger notion of typicality that is based on defeasibility. KLM-style defeasible reasoning is a logic system that employs defeasibility while Propositional Typicality Logic (PTL) is a logic that does the same for the notion of typicality. Deontic paradoxes are often used to examine logic systems as the paradoxes provide undesirable results even if the scenarios seem intuitive. Forrester’s paradox is one of the most famous of these paradoxes. This paper shows that KLM-style defeasible reasoning and PTL can be used to represent and reason with Forrester’s paradox in such a way as to block undesirable conclusions without completely sacrificing desirable deontic properties.}, year = {2020}, journal = {South African Computer Journal}, volume = {32}, pages = {161-183}, issue = {2}, publisher = {SACJ}, address = {Online}, isbn = {ISSN 2313-7835}, doi = {10.18489/sacj.v32i2.848}, }
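For readers unfamiliar with Forrester's ("gentle murderer") paradox, the standard derivation in Standard Deontic Logic (SDL) can be written as follows; this is the textbook rendering of the paradox, not the defeasible or typicality-based encoding developed in the paper.

% Forrester's paradox in Standard Deontic Logic (SDL)
\begin{align*}
  1.\;& O(\neg k)              && \text{one ought not to kill}\\
  2.\;& k \rightarrow O(g)     && \text{if one kills, one ought to kill gently}\\
  3.\;& g \rightarrow k        && \text{killing gently is killing}\\
  4.\;& k                      && \text{a killing in fact takes place}\\
  5.\;& O(g)                   && \text{from 2 and 4 by modus ponens}\\
  6.\;& O(g) \rightarrow O(k)  && \text{from 3 by the SDL inheritance rule}\\
  7.\;& O(k)                   && \text{from 5 and 6, which together with the D axiom contradicts 1}
\end{align*}

The abstract above argues that recasting such conditional obligations defeasibly makes it possible to block the unwanted conclusion in line 7 while retaining the intuitive readings of lines 1 to 4.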
Datalog is a declarative logic programming language that uses classical logical reasoning as its basic form of reasoning. Defeasible reasoning is a form of non-classical reasoning that is able to deal with exceptions to general assertions in a formal manner. The KLM approach to defeasible reasoning is an axiomatic approach based on the concept of plausible inference. Since Datalog uses classical reasoning, it is currently not able to handle defeasible implications and exceptions. We aim to extend the expressivity of Datalog by incorporating KLM-style defeasible reasoning into classical Datalog. We present a systematic approach for extending the KLM properties and a well-known form of defeasible entailment: Rational Closure. We conclude by exploring Datalog extensions of less conservative forms of defeasible entailment: Relevant and Lexicographic Closure. We provide algorithmic definitions for these forms of defeasible entailment and prove that the definitions are LM-rational.
@article{409, author = {Matthew Morris and Tala Ross and Tommie Meyer}, title = {Algorithmic definitions for KLM-style defeasible disjunctive Datalog}, abstract = {Datalog is a declarative logic programming language that uses classical logical reasoning as its basic form of reasoning. Defeasible reasoning is a form of non-classical reasoning that is able to deal with exceptions to general assertions in a formal manner. The KLM approach to defeasible reasoning is an axiomatic approach based on the concept of plausible inference. Since Datalog uses classical reasoning, it is currently not able to handle defeasible implications and exceptions. We aim to extend the expressivity of Datalog by incorporating KLM-style defeasible reasoning into classical Datalog. We present a systematic approach for extending the KLM properties and a well-known form of defeasible entailment: Rational Closure. We conclude by exploring Datalog extensions of less conservative forms of defeasible entailment: Relevant and Lexicographic Closure. We provide algorithmic definitions for these forms of defeasible entailment and prove that the definitions are LM-rational.}, year = {2020}, journal = {South African Computer Journal}, volume = {32}, pages = {141-160}, issue = {2}, publisher = {SACJ}, address = {Online}, isbn = {ISSN 2313-7835}, doi = {10.18489/sacj.v32i2.846}, }
Clustering is frequently used in the energy domain to identify dominant electricity consumption patterns of households, which can be used to construct customer archetypes for long term energy planning. Selecting a useful set of clusters however requires extensive experimentation and domain knowledge. While internal clustering validation measures are well established in the electricity domain, they are limited for selecting useful clusters. Based on an application case study in South Africa, we present an approach for formalising implicit expert knowledge as external evaluation measures to create customer archetypes that capture variability in residential electricity consumption behaviour. By combining internal and external validation measures in a structured manner, we were able to evaluate clustering structures based on the utility they present for our application. We validate the selected clusters in a use case where we successfully reconstruct customer archetypes previously developed by experts. Our approach shows promise for transparent and repeatable cluster ranking and selection by data scientists, even if they have limited domain knowledge.
@article{408, author = {Wiebke Toussaint and Deshen Moodley}, title = {Clustering Residential Electricity Consumption Data to Create Archetypes that Capture Household Behaviour in South Africa}, abstract = {Clustering is frequently used in the energy domain to identify dominant electricity consumption patterns of households, which can be used to construct customer archetypes for long term energy planning. Selecting a useful set of clusters however requires extensive experimentation and domain knowledge. While internal clustering validation measures are well established in the electricity domain, they are limited for selecting useful clusters. Based on an application case study in South Africa, we present an approach for formalising implicit expert knowledge as external evaluation measures to create customer archetypes that capture variability in residential electricity consumption behaviour. By combining internal and external validation measures in a structured manner, we were able to evaluate clustering structures based on the utility they present for our application. We validate the selected clusters in a use case where we successfully reconstruct customer archetypes previously developed by experts. Our approach shows promise for transparent and repeatable cluster ranking and selection by data scientists, even if they have limited domain knowledge.}, year = {2020}, journal = {South African Computer Journal}, volume = {32}, pages = {1-34}, issue = {2}, publisher = {SACJ}, address = {Online}, isbn = {ISSN 2313-7835}, url = {http://www.scielo.org.za/scielo.php?pid=S2313-78352020000200003&script=sci_arttext&tlng=en}, doi = {http://dx.doi.org/10.18489/sacj.v32i2.845}, }
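The following is a minimal sketch of the kind of combined cluster ranking described above, using scikit-learn, a synthetic stand-in for normalised load profiles, and a made-up external "expert" score; the scoring functions and the equal weighting are illustrative assumptions, not the authors' actual measures.

# Rank candidate clusterings by combining an internal validation measure with a
# simple external, expert-derived score (both illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))  # stand-in for 24-hour normalised household load profiles

def external_expert_score(X, labels):
    # Placeholder for formalised domain knowledge, e.g. rewarding clusterings whose
    # centroid peak hours are spread across the day.
    peaks = [np.argmax(X[labels == c].mean(axis=0)) for c in np.unique(labels)]
    return len(set(peaks)) / len(peaks)

results = []
for k in range(3, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    internal = 1.0 / (1.0 + davies_bouldin_score(X, labels))  # transformed so higher = better
    external = external_expert_score(X, labels)
    results.append((k, 0.5 * internal + 0.5 * external))      # illustrative equal weighting

for k, score in sorted(results, key=lambda t: -t[1]):
    print(f"k={k}: combined score {score:.3f}")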
Recently, a hybrid Deep Neural Network (DNN) algorithm, TreNet was proposed for predicting trends in time series data. While TreNet was shown to have superior performance for trend prediction to other DNN and traditional ML approaches, the validation method used did not take into account the sequential nature of time series datasets and did not deal with model update. In this research we replicated the TreNet experiments on the same datasets using a walk-forward validation method and tested our best model over multiple independent runs to evaluate model stability. We compared the performance of the hybrid TreNet algorithm, on four datasets to vanilla DNN algorithms that take in point data, and also to traditional ML algorithms. We found that in general TreNet still performs better than the vanilla DNN models, but not on all datasets as reported in the original TreNet study. This study highlights the importance of using an appropriate validation method and evaluating model stability for evaluating and developing machine learning models for trend prediction in time series data.
@inproceedings{407, author = {Kouame Kouassi and Deshen Moodley}, title = {An Analysis of Deep Neural Networks for Predicting Trends in Time Series Data}, abstract = {Recently, a hybrid Deep Neural Network (DNN) algorithm, TreNet was proposed for predicting trends in time series data. While TreNet was shown to have superior performance for trend prediction to other DNN and traditional ML approaches, the validation method used did not take into account the sequential nature of time series datasets and did not deal with model update. In this research we replicated the TreNet experiments on the same datasets using a walk-forward validation method and tested our best model over multiple independent runs to evaluate model stability. We compared the performance of the hybrid TreNet algorithm, on four datasets to vanilla DNN algorithms that take in point data, and also to traditional ML algorithms. We found that in general TreNet still performs better than the vanilla DNN models, but not on all datasets as reported in the original TreNet study. This study highlights the importance of using an appropriate validation method and evaluating model stability for evaluating and developing machine learning models for trend prediction in time series data.}, year = {2020}, booktitle = {First Southern African Conference for AI Research (SACAIR 2020)}, pages = {119-140}, month = {22/02/2021}, publisher = {Springer}, address = {Virtual}, isbn = {978-3-030-66151-9}, url = {https://link.springer.com/book/10.1007/978-3-030-66151-9}, doi = {https://doi.org/10.1007/978-3-030-66151-9_8}, }
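The walk-forward validation referred to above trains on all data up to a point and tests on the block that immediately follows, so the model never sees the future. A minimal sketch of the splitting scheme is shown below; the model and data are placeholders, not TreNet or the paper's datasets.

# Walk-forward (rolling-origin) validation for time series data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=1000)

n_folds, test_size = 5, 100
start = len(X) - n_folds * test_size
errors = []
for i in range(n_folds):
    split = start + i * test_size
    X_train, y_train = X[:split], y[:split]                       # everything before the origin
    X_test, y_test = X[split:split + test_size], y[split:split + test_size]  # the block that follows
    model = LinearRegression().fit(X_train, y_train)              # stand-in for TreNet or a vanilla DNN
    errors.append(mean_squared_error(y_test, model.predict(X_test)))

print("per-fold MSE:", [round(e, 4) for e in errors])
print("mean MSE over walk-forward folds:", round(float(np.mean(errors)), 4))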
Knowledge Discovery and Evolution (KDE) is of interest to a broad array of researchers from both Philosophy of Science (PoS) and Artificial Intelligence (AI), in particular, Knowledge Representation and Reasoning (KR), Machine Learning and Data Mining (ML-DM) and the Agent Based Systems (ABS) communities. In PoS, Haig recently proposed a so-called broad theory of scientific method that uses abduction for generating theories to explain phenomena. He refers to this method of scientific inquiry as the Abductive Theory of Method (ATOM). In this paper, we analyse ATOM, align it with KR and ML-DM perspectives and propose an algorithm and an ontology for supporting agent based knowledge discovery and evolution based on ATOM. We illustrate the use of the algorithm and the ontology on a use case application for electricity consumption behaviour in residential households.
@inproceedings{405, author = {Tezira Wanyana and Deshen Moodley and Tommie Meyer}, title = {An Ontology for Supporting Knowledge Discovery and Evolution}, abstract = {Knowledge Discovery and Evolution (KDE) is of interest to a broad array of researchers from both Philosophy of Science (PoS) and Artificial Intelligence (AI), in particular, Knowledge Representation and Reasoning (KR), Machine Learning and Data Mining (ML-DM) and the Agent Based Systems (ABS) communities. In PoS, Haig recently proposed a so-called broad theory of scientific method that uses abduction for generating theories to explain phenomena. He refers to this method of scientific inquiry as the Abductive Theory of Method (ATOM). In this paper, we analyse ATOM, align it with KR and ML-DM perspectives and propose an algorithm and an ontology for supporting agent based knowledge discovery and evolution based on ATOM. We illustrate the use of the algorithm and the ontology on a use case application for electricity consumption behaviour in residential households.}, year = {2020}, booktitle = {First Southern African Conference for Artificial Intelligence Research}, pages = {206-221}, month = {22/02/2021}, publisher = {SACAIR2020}, address = {Virtual}, isbn = {978-0-620-89373-2}, url = {https://2020.sacair.org.za/wp-content/uploads/2021/02/SACAIR_Proceedings-MainBook_Finv4_compressed.pdf}, }
One of the fundamental assumptions of machine learning is that learnt models are applied to data that is identically distributed to the training data. This assumption is often not realistic: for example, data collected from a single source at different times may not be distributed identically, due to sampling bias or changes in the environment. We propose a new architecture called a meta-model which predicts performance for unseen models. This approach is applicable when several ‘proxy’ datasets are available to train a model to be deployed on a ‘target’ test set; the architecture is used to identify which regression algorithms should be used as well as which datasets are most useful to train for a given target dataset. Finally, we demonstrate the strengths and weaknesses of the proposed meta-model by making use of artificially generated datasets using a variation of the Friedman method 3 used to generate artificial regression datasets, and discuss real-world applications of our approach.
@inproceedings{404, author = {Dylan Lamprecht and Etienne Barnard}, title = {Using a meta-model to compensate for training-evaluation mismatches}, abstract = {One of the fundamental assumptions of machine learning is that learnt models are applied to data that is identically distributed to the training data. This assumption is often not realistic: for example, data collected from a single source at different times may not be distributed identically, due to sampling bias or changes in the environment. We propose a new architecture called a meta-model which predicts performance for unseen models. This approach is applicable when several ‘proxy’ datasets are available to train a model to be deployed on a ‘target’ test set; the architecture is used to identify which regression algorithms should be used as well as which datasets are most useful to train for a given target dataset. Finally, we demonstrate the strengths and weaknesses of the proposed meta-model by making use of artificially generated datasets using a variation of the Friedman method 3 used to generate artificial regression datasets, and discuss real-world applications of our approach.}, year = {2020}, booktitle = {Southern African Conference for Artificial Intelligence Research}, pages = {321-334}, month = {22/02/2021 - 26/02/2021}, address = {South Africa}, isbn = {978-0-620-89373-2}, url = {https://sacair.org.za/proceedings/}, }
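The sketch below only illustrates the proxy/target setting that the abstract describes: several Friedman #3 datasets act as proxies, a crudely shifted copy acts as the target, and we measure how well a regressor trained on each proxy transfers. The meta-model that predicts transfer performance is not reproduced here; the shift, models and parameters are illustrative assumptions.

# Proxy/target mismatch with scikit-learn's Friedman #3 generator (illustrative).
import numpy as np
from sklearn.datasets import make_friedman3
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def shifted_friedman3(n, noise, scale, seed):
    X, y = make_friedman3(n_samples=n, noise=noise, random_state=seed)
    return X, y * scale          # crude, illustrative distribution shift on the targets

proxies = [shifted_friedman3(500, noise=n, scale=1.0, seed=s) for s, n in enumerate([0.0, 0.1, 0.3])]
X_target, y_target = shifted_friedman3(500, noise=0.1, scale=1.2, seed=99)

for i, (X_p, y_p) in enumerate(proxies):
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_p, y_p)
    mse = mean_squared_error(y_target, model.predict(X_target))
    print(f"proxy {i}: target MSE = {mse:.4f}")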
Word embeddings are widely used in natural language processing (NLP) tasks. Most work on word embeddings focuses on monolingual languages with large available datasets. For embeddings to be useful in a multilingual environment, as in South Africa, the training techniques have to be adjusted to cater for a) multiple languages, b) smaller datasets and c) the occurrence of code-switching. One of the biggest roadblocks is to obtain datasets that include examples of natural code-switching, since code switching is generally avoided in written material. A solution to this problem is to use speech recognised data. Embedding packages like Word2Vec and GloVe have default hyper-parameter settings that are usually optimised for training on large datasets and evaluation on analogy tasks. When using embeddings for problems such as text classification in our multilingual environment, the hyper-parameters have to be optimised for the specific data and task. We investigate the importance of optimising relevant hyper-parameters for training word embeddings with speech recognised data, where code-switching occurs, and evaluate against the real-world problem of classifying radio and television recordings with code switching. We compare these models with a bag of words baseline model as well as a pre-trained GloVe model.
@inproceedings{403, author = {Nuette Heyns and Etienne Barnard}, title = {Optimising word embeddings for recognised multilingual speech}, abstract = {Word embeddings are widely used in natural language processing (NLP) tasks. Most work on word embeddings focuses on monolingual languages with large available datasets. For embeddings to be useful in a multilingual environment, as in South Africa, the training techniques have to be adjusted to cater for a) multiple languages, b) smaller datasets and c) the occurrence of code-switching. One of the biggest roadblocks is to obtain datasets that include examples of natural code-switching, since code switching is generally avoided in written material. A solution to this problem is to use speech recognised data. Embedding packages like Word2Vec and GloVe have default hyper-parameter settings that are usually optimised for training on large datasets and evaluation on analogy tasks. When using embeddings for problems such as text classification in our multilingual environment, the hyper-parameters have to be optimised for the specific data and task. We investigate the importance of optimising relevant hyper-parameters for training word embeddings with speech recognised data, where code-switching occurs, and evaluate against the real-world problem of classifying radio and television recordings with code switching. We compare these models with a bag of words baseline model as well as a pre-trained GloVe model.}, year = {2020}, booktitle = {Southern African Conference for Artificial Intelligence Research}, pages = {102-116}, month = {22/02/2021 - 26/02/2021}, address = {South Africa}, isbn = {978-0-620-89373-2}, url = {https://sacair.org.za/proceedings/}, }
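A minimal sketch of what tuning these hyper-parameters looks like in practice is given below, using gensim's Word2Vec. Parameter names follow gensim 4.x (earlier versions use size and iter instead of vector_size and epochs); the tiny corpus is a stand-in for recognised, code-switched broadcast speech, and the specific values are illustrative rather than the settings found optimal in the paper.

# Training Word2Vec with explicitly chosen hyper-parameters rather than library defaults.
from gensim.models import Word2Vec

sentences = [
    ["die", "president", "said", "dat", "die", "economy", "groei"],
    ["the", "minister", "het", "gesê", "the", "budget", "is", "klaar"],
]  # stand-in for recognised speech containing code-switching

model = Word2Vec(
    sentences,
    vector_size=50,   # smaller vectors for a small corpus
    window=3,         # narrower context window
    min_count=1,      # keep rare words; small datasets cannot afford a high cutoff
    sg=1,             # skip-gram tends to behave better on small corpora
    epochs=50,        # more passes to compensate for limited data
    seed=1,
)
print(model.wv.most_similar("economy", topn=3))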
Each node in a neural network is trained to activate for a specific region in the input domain. Any training samples that fall within this domain are therefore implicitly clustered together. Recent work has highlighted the importance of these clusters during the training process but has not yet investigated their evolution during training. Towards this goal, we train several ReLU-activated MLPs on a simple classification task (MNIST) and show that a consistent training process emerges: (1) sample clusters initially increase in size and then decrease as training progresses, (2) the size of sample clusters in the first layer decreases more rapidly than in deeper layers, (3) binary node activations, especially of nodes in deeper layers, become more sensitive to class membership as training progresses, (4) individual nodes remain poor predictors of class membership, even if accurate when applied as a group. We report on the detail of these findings and interpret them from the perspective of a high-dimensional clustering process.
@inproceedings{402, author = {Daniël Haasbroek and Marelie Davel}, title = {Exploring neural network training dynamics through binary node activations}, abstract = {Each node in a neural network is trained to activate for a specific region in the input domain. Any training samples that fall within this domain are therefore implicitly clustered together. Recent work has highlighted the importance of these clusters during the training process but has not yet investigated their evolution during training. Towards this goal, we train several ReLU-activated MLPs on a simple classification task (MNIST) and show that a consistent training process emerges: (1) sample clusters initially increase in size and then decrease as training progresses, (2) the size of sample clusters in the first layer decreases more rapidly than in deeper layers, (3) binary node activations, especially of nodes in deeper layers, become more sensitive to class membership as training progresses, (4) individual nodes remain poor predictors of class membership, even if accurate when applied as a group. We report on the detail of these findings and interpret them from the perspective of a high-dimensional clustering process.}, year = {2020}, booktitle = {Southern African Conference for Artificial Intelligence Research}, pages = {304-320}, month = {22/02/2021 - 26/02/2021}, address = {South Africa}, isbn = {978-0-620-89373-2}, url = {https://sacair.org.za/proceedings/}, }
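The sketch below shows how the binary (on/off) ReLU activation patterns underlying these sample clusters can be computed and grouped. The network here is random and untrained, and the data is synthetic; in the paper's setting one would track these groupings for a trained MLP over the course of training.

# Compute per-sample binary ReLU activation patterns for an MLP and group samples
# that share an identical pattern in each hidden layer.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 784))                 # stand-in for flattened MNIST images

layer_sizes = [784, 128, 64, 10]
weights = [rng.normal(scale=0.05, size=(i, o)) for i, o in zip(layer_sizes[:-1], layer_sizes[1:])]

def binary_patterns(X, weights):
    patterns, h = [], X
    for W in weights[:-1]:                       # hidden layers only
        pre = h @ W
        h = np.maximum(pre, 0.0)                 # ReLU
        patterns.append((pre > 0).astype(np.uint8))  # 1 = node active for this sample
    return patterns

for layer, pattern in enumerate(binary_patterns(X, weights), start=1):
    counts = Counter(map(bytes, pattern))        # samples sharing an identical on/off pattern
    sizes = sorted(counts.values(), reverse=True)
    print(f"layer {layer}: {len(counts)} distinct patterns, largest cluster {sizes[0]} samples")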
When training neural networks as classifiers, it is common to observe an increase in average test loss while still maintaining or improving the overall classification accuracy on the same dataset. In spite of the ubiquity of this phenomenon, it has not been well studied and is often dismissively attributed to an increase in borderline correct classifications. We present an empirical investigation that shows how this phenomenon is actually a result of the differential manner by which test samples are processed. In essence: test loss does not increase overall, but only for a small minority of samples. Large representational capacities allow losses to decrease for the vast majority of test samples at the cost of extreme increases for others. This effect seems to be mainly caused by increased parameter values relating to the correctly processed sample features. Our findings contribute to the practical understanding of a common behaviour of deep neural networks. We also discuss the implications of this work for network optimisation and generalisation.
@article{484, author = {Arthur Venter and Marthinus Theunissen and Marelie Davel}, title = {Pre-interpolation loss behaviour in neural networks}, abstract = {When training neural networks as classifiers, it is common to observe an increase in average test loss while still maintaining or improving the overall classification accuracy on the same dataset. In spite of the ubiquity of this phenomenon, it has not been well studied and is often dismissively attributed to an increase in borderline correct classifications. We present an empirical investigation that shows how this phenomenon is actually a result of the differential manner by which test samples are processed. In essence: test loss does not increase overall, but only for a small minority of samples. Large representational capacities allow losses to decrease for the vast majority of test samples at the cost of extreme increases for others. This effect seems to be mainly caused by increased parameter values relating to the correctly processed sample features. Our findings contribute to the practical understanding of a common behaviour of deep neural networks. We also discuss the implications of this work for network optimisation and generalisation.}, year = {2020}, journal = {Communications in Computer and Information Science}, volume = {1342}, pages = {296-309}, publisher = {Southern African Conference for Artificial Intelligence Research}, address = {South Africa}, isbn = {978-3-030-66151-9}, doi = {https://doi.org/10.1007/978-3-030-66151-9_19}, }
Although Convolutional Neural Networks (CNNs) are widely used, their translation invariance (ability to deal with translated inputs) is still subject to some controversy. We explore this question using translation-sensitivity maps to quantify how sensitive a standard CNN is to a translated input. We propose the use of cosine similarity as sensitivity metric over Euclidean distance, and discuss the importance of restricting the dimensionality of either of these metrics when comparing architectures. Our main focus is to investigate the effect of different architectural components of a standard CNN on that network’s sensitivity to translation. By varying convolutional kernel sizes and amounts of zero padding, we control the size of the feature maps produced, allowing us to quantify the extent to which these elements influence translation invariance. We also measure translation invariance at different locations within the CNN to determine the extent to which convolutional and fully connected layers, respectively, contribute to the translation invariance of a CNN as a whole. Our analysis indicates that both convolutional kernel size and feature map size have a systematic influence on translation invariance. We also see that convolutional layers contribute less than expected to translation invariance, when not specifically forced to do so.
@article{485, author = {Johannes Myburgh and Coenraad Mouton and Marelie Davel}, title = {Tracking translation invariance in CNNs}, abstract = {Although Convolutional Neural Networks (CNNs) are widely used, their translation invariance (ability to deal with translated inputs) is still subject to some controversy. We explore this question using translation-sensitivity maps to quantify how sensitive a standard CNN is to a translated input. We propose the use of cosine similarity as sensitivity metric over Euclidean distance, and discuss the importance of restricting the dimensionality of either of these metrics when comparing architectures. Our main focus is to investigate the effect of different architectural components of a standard CNN on that network’s sensitivity to translation. By varying convolutional kernel sizes and amounts of zero padding, we control the size of the feature maps produced, allowing us to quantify the extent to which these elements influence translation invariance. We also measure translation invariance at different locations within the CNN to determine the extent to which convolutional and fully connected layers, respectively, contribute to the translation invariance of a CNN as a whole. Our analysis indicates that both convolutional kernel size and feature map size have a systematic influence on translation invariance. We also see that convolutional layers contribute less than expected to translation invariance, when not specifically forced to do so.}, year = {2020}, journal = {Communications in Computer and Information Science}, volume = {1342}, pages = {282-295}, publisher = {Southern African Conference for Artificial Intelligence Research}, isbn = {978-3-030-66151-9}, doi = {https://doi.org/10.1007/978-3-030-66151-9_18}, }
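A translation-sensitivity map of the kind used above can be built by comparing a model's representation of an image with its representations of translated copies over a grid of shifts. The sketch below uses a placeholder feature function and a random image; the study itself measures this for trained CNNs at various points in the network.

# Build a translation-sensitivity map: cosine similarity between the representation
# of an image and of its translated copies, over a grid of integer shifts.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))

def translate(img, dx, dy):
    # integer-pixel shift with zero padding
    out = np.zeros_like(img)
    h, w = img.shape
    src = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    out[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)] = src
    return out

def features(img):
    # stand-in for the activations of a trained CNN layer
    return img.reshape(-1)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

base = features(image)
shifts = range(-5, 6)
sensitivity_map = np.array([[cosine(base, features(translate(image, dx, dy)))
                             for dx in shifts] for dy in shifts])
print(sensitivity_map.shape)   # (11, 11): one similarity value per (dy, dx) shift
print(sensitivity_map[5, 5])   # ~1.0 at zero shift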
Convolutional Neural Networks have become the standard for image classification tasks, however, these architectures are not invariant to translations of the input image. This lack of invariance is attributed to the use of stride which subsamples the input, resulting in a loss of information, and fully connected layers which lack spatial reasoning. We show that stride can greatly benefit translation invariance given that it is combined with sufficient similarity between neighbouring pixels, a characteristic which we refer to as local homogeneity. We also observe that this characteristic is dataset-specific and dictates the relationship between pooling kernel size and stride required for translation invariance. Furthermore we find that a trade-off exists between generalization and translation invariance in the case of pooling kernel size, as larger kernel sizes lead to better invariance but poorer generalization. Finally we explore the efficacy of other solutions proposed, namely global average pooling, anti-aliasing, and data augmentation, both empirically and through the lens of local homogeneity.
@article{486, author = {Coenraad Mouton and Johannes Myburgh and Marelie Davel}, title = {Stride and translation invariance in CNNs}, abstract = {Convolutional Neural Networks have become the standard for image classification tasks, however, these architectures are not invariant to translations of the input image. This lack of invariance is attributed to the use of stride which subsamples the input, resulting in a loss of information, and fully connected layers which lack spatial reasoning. We show that stride can greatly benefit translation invariance given that it is combined with sufficient similarity between neighbouring pixels, a characteristic which we refer to as local homogeneity. We also observe that this characteristic is dataset-specific and dictates the relationship between pooling kernel size and stride required for translation invariance. Furthermore we find that a trade-off exists between generalization and translation invariance in the case of pooling kernel size, as larger kernel sizes lead to better invariance but poorer generalization. Finally we explore the efficacy of other solutions proposed, namely global average pooling, anti-aliasing, and data augmentation, both empirically and through the lens of local homogeneity.}, year = {2020}, journal = {Communications in Computer and Information Science}, volume = {1342}, pages = {267-281}, publisher = {Southern African Conference for Artificial Intelligence Research}, address = {South Africa}, isbn = {978-3-030-66151-9}, doi = {https://doi.org/10.1007/978-3-030-66151-9_17}, }
We investigate whether word embeddings using deep neural networks can assist in the analysis of text produced by a speech-recognition system. In particular, we develop algorithms to identify which words are incorrectly detected by a speech-recognition system in broadcast news. The multilingual corpus used in this investigation contains speech from the eleven official South African languages, as well as Hindi. Popular word embedding algorithms such as Word2Vec and fastText are investigated and compared with context-specific embedding representations such as Doc2Vec and non-context specific statistical sentence embedding methods such as term frequency-inverse document frequency (TFIDF), which is used as our baseline method. These various embedding methods are then used as fixed length input representations for a logistic regression and feed forward neural network classifier. The output is used as an additional categorical input feature to a CatBoost classifier to determine whether the words were correctly recognised. Other methods are also investigated, including a method that uses the word embedding itself and cosine similarity between specific keywords to identify whether a specific keyword was correctly detected. When relying only on the speech-text data, the best result was obtained using the TFIDF document embeddings as input features to a feed forward neural network. Adding the output from the feed forward neural network as an additional feature to the CatBoost classifier did not enhance the classifier’s performance compared to using the non-textual information provided, although adding the output from a weaker classifier was somewhat beneficial.
@article{398, author = {Rhyno Strydom and Etienne Barnard}, title = {Classifying recognised speech with deep neural networks}, abstract = {We investigate whether word embeddings using deep neural networks can assist in the analysis of text produced by a speech-recognition system. In particular, we develop algorithms to identify which words are incorrectly detected by a speech-recognition system in broadcast news. The multilingual corpus used in this investigation contains speech from the eleven official South African languages, as well as Hindi. Popular word embedding algorithms such as Word2Vec and fastText are investigated and compared with context-specific embedding representations such as Doc2Vec and non-context-specific statistical sentence embedding methods such as term frequency-inverse document frequency (TFIDF), which is used as our baseline method. These various embedding methods are then used as fixed-length input representations for a logistic regression and a feed-forward neural network classifier. The output is used as an additional categorical input feature to a CatBoost classifier to determine whether the words were correctly recognised. Other methods are also investigated, including a method that uses the word embedding itself and cosine similarity between specific keywords to identify whether a specific keyword was correctly detected. When relying only on the speech-text data, the best result was obtained using the TFIDF document embeddings as input features to a feed-forward neural network. Adding the output from the feed-forward neural network as an additional feature to the CatBoost classifier did not enhance the classifier’s performance compared to using the non-textual information provided, although adding the output from a weaker classifier was somewhat beneficial.}, year = {2020}, journal = {Southern African Conference for Artificial Intelligence Research}, pages = {191-205}, month = {22/02/2021 - 26/02/2021}, publisher = {Southern African Conference for Artificial Intelligence Research}, address = {South Africa}, isbn = {978-0-620-89373-2}, }
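For readers unfamiliar with the baseline described above, the following sketch shows the general shape of a TF-IDF-plus-classifier setup for judging whether recognised text is correct. It is not the authors' pipeline: the toy sentences, the labels, and the choice of scikit-learn's logistic regression are placeholders used only to keep the example self-contained.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder recognised-text snippets; 1 = correctly recognised, 0 = misrecognised.
texts = [
    "the president addressed parliament today",
    "the present a dressed parly ment two day",
    "weather forecast predicts rain this weekend",
    "whether four cast predicts reign this weak end",
]
labels = [1, 0, 1, 0]

# TF-IDF features (word unigrams and bigrams) feeding a simple classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["the president addressed parliament"]))

In the reported setup, the strongest text-only configuration instead feeds the TFIDF features to a feed-forward neural network, whose output is then offered to a CatBoost classifier alongside non-textual features.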
The understanding of generalisation in machine learning is in a state of flux, in part due to the ability of deep learning models to interpolate noisy training data and still perform appropriately on out-of-sample data, thereby contradicting long-held intuitions about the bias-variance trade-off in learning. We expand upon relevant existing work by discussing local attributes of neural network training within the context of a relatively simple framework. We describe how various types of noise can be compensated for within the proposed framework in order to allow the deep learning model to generalise in spite of interpolating spurious function descriptors. Empirically, we support our postulates with experiments involving overparameterised multilayer perceptrons and controlled training data noise. The main insights are that deep learning models are optimised for training data modularly, with different regions in the function space dedicated to fitting distinct types of sample information. Additionally, we show that models tend to fit uncorrupted samples first. Based on this finding, we propose a conjecture to explain an observed instance of the epoch-wise double-descent phenomenon. Our findings suggest that the notion of model capacity needs to be modified to consider the distributed way training data is fitted across sub-units.
@article{394, author = {Marthinus Theunissen and Marelie Davel and Etienne Barnard}, title = {Benign interpolation of noise in deep learning}, abstract = {The understanding of generalisation in machine learning is in a state of flux, in part due to the ability of deep learning models to interpolate noisy training data and still perform appropriately on out-of-sample data, thereby contradicting long-held intuitions about the bias-variance trade-off in learning. We expand upon relevant existing work by discussing local attributes of neural network training within the context of a relatively simple framework. We describe how various types of noise can be compensated for within the proposed framework in order to allow the deep learning model to generalise in spite of interpolating spurious function descriptors. Empirically, we support our postulates with experiments involving overparameterised multilayer perceptrons and controlled training data noise. The main insights are that deep learning models are optimised for training data modularly, with different regions in the function space dedicated to fitting distinct types of sample information. Additionally, we show that models tend to fit uncorrupted samples first. Based on this finding, we propose a conjecture to explain an observed instance of the epoch-wise double-descent phenomenon. Our findings suggest that the notion of model capacity needs to be modified to consider the distributed way training data is fitted across sub-units.}, year = {2020}, journal = {South African Computer Journal}, volume = {32}, pages = {80-101}, issue = {2}, publisher = {South African Institute of Computer Scientists and Information Technologists}, issn = {1015-7999 (print); 2313-7835 (online)}, doi = {10.18489/sacj.v32i2.833}, }
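The following sketch gives a minimal, self-contained version of the style of experiment the abstract describes: an over-parameterised multilayer perceptron is trained on synthetic, clustered data in which a fraction of labels has been corrupted, and the fit to the clean and corrupted subsets is tracked separately. The data generator, network size, noise fraction, and training schedule are illustrative assumptions, not the paper's experimental settings.

import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, classes, noise_frac = 512, 20, 4, 0.2

# Synthetic, clustered data so that clean labels carry learnable structure.
centers = 3.0 * torch.randn(classes, d)
y = torch.randint(0, classes, (n,))
X = centers[y] + torch.randn(n, d)

# Corrupt a fraction of the training labels with randomly drawn classes.
noisy = torch.rand(n) < noise_frac
y_train = y.clone()
y_train[noisy] = torch.randint(0, classes, (int(noisy.sum()),))

model = nn.Sequential(nn.Linear(d, 512), nn.ReLU(),
                      nn.Linear(512, 512), nn.ReLU(),
                      nn.Linear(512, classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(201):
    opt.zero_grad()
    out = model(X)
    loss_fn(out, y_train).backward()
    opt.step()
    if epoch % 20 == 0:
        pred = out.argmax(dim=1)
        clean_fit = (pred[~noisy] == y_train[~noisy]).float().mean().item()
        noisy_fit = (pred[noisy] == y_train[noisy]).float().mean().item()
        print(f"epoch {epoch:3d}  clean fit {clean_fit:.2f}  corrupted fit {noisy_fit:.2f}")

On data of this kind the clean subset is typically fitted well before the corrupted one, which is the qualitative effect the abstract refers to when noting that models tend to fit uncorrupted samples first.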
Feedforward neural networks provide the basis for complex regression models that produce accurate predictions in a variety of applications. However, they generally do not explicitly provide any information about the utility of each of the input parameters in terms of their contribution to model accuracy. With this in mind, we develop the pairwise network, an adaptation to the fully connected feedforward network that allows the ranking of input parameters according to their contribution to the model output. The application is demonstrated in the context of a space physics problem. Geomagnetic storms are multi-day events characterised by significant perturbations to the magnetic field of the Earth, driven by solar activity. Previous storm forecasting efforts typically use solar wind measurements as input parameters to a regression problem tasked with predicting a perturbation index such as the 1-minute cadence symmetric-H (Sym-H) index. We revisit the task of predicting Sym-H from solar wind parameters, with two 'twists': (i) geomagnetic storm phase information is incorporated as model inputs and shown to increase prediction performance; (ii) we describe the pairwise network structure and training process, first validating ranking ability on synthetic data before using the network to analyse the Sym-H problem.
@article{392, author = {Jacques Beukes and Marelie Davel and Stefan Lotz}, title = {Pairwise networks for feature ranking of a geomagnetic storm model}, abstract = {Feedforward neural networks provide the basis for complex regression models that produce accurate predictions in a variety of applications. However, they generally do not explicitly provide any information about the utility of each of the input parameters in terms of their contribution to model accuracy. With this in mind, we develop the pairwise network, an adaptation to the fully connected feedforward network that allows the ranking of input parameters according to their contribution to the model output. The application is demonstrated in the context of a space physics problem. Geomagnetic storms are multi-day events characterised by significant perturbations to the magnetic field of the Earth, driven by solar activity. Previous storm forecasting efforts typically use solar wind measurements as input parameters to a regression problem tasked with predicting a perturbation index such as the 1-minute cadence symmetric-H (Sym-H) index. We revisit the task of predicting Sym-H from solar wind parameters, with two 'twists': (i) geomagnetic storm phase information is incorporated as model inputs and shown to increase prediction performance; (ii) we describe the pairwise network structure and training process, first validating ranking ability on synthetic data before using the network to analyse the Sym-H problem.}, year = {2020}, journal = {South African Computer Journal}, volume = {32}, pages = {35-55}, issue = {2}, publisher = {South African Institute of Computer Scientists and Information Technologists}, issn = {1015-7999 (print); 2313-7835 (online)}, doi = {10.18489/sacj.v32i2.860}, }
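Because the abstract does not spell out the pairwise architecture, the sketch below deliberately uses a different, generic technique, permutation-based feature ranking of an ordinary feedforward regressor, purely to illustrate what ranking input parameters by their contribution to the output means in practice. It is not the pairwise network, and the synthetic features merely stand in for solar-wind measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
# The target depends strongly on features 0 and 2, weakly on 1, and not at all on 3 and 4.
y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=1000)

# Fit a small feedforward regressor, then rank inputs by permutation importance.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("feature ranking (most to least important):", ranking)

With the coefficients above, features 0 and 2 should dominate the ranking, feature 1 should follow, and the two irrelevant features should rank last.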
No framework exists that can explain and predict the generalisation ability of deep neural networks in general circumstances. In fact, this question has not been answered for some of the least complicated of neural network architectures: fully-connected feedforward networks with rectified linear activations and a limited number of hidden layers. For such an architecture, we show how adding a summary layer to the network makes it more amenable to analysis, and allows us to define the conditions that are required to guarantee that a set of samples will all be classified correctly. This process does not describe the generalisation behaviour of these networks, but produces a number of metrics that are useful for probing their learning and generalisation behaviour. We support the analytical conclusions with empirical results, both to confirm that the mathematical guarantees hold in practice, and to demonstrate the use of the analysis process.
@article{391, author = {Marelie Davel}, title = {Using summary layers to probe neural network behaviour}, abstract = {No framework exists that can explain and predict the generalisation ability of deep neural networks in general circumstances. In fact, this question has not been answered for some of the least complicated of neural network architectures: fully-connected feedforward networks with rectified linear activations and a limited number of hidden layers. For such an architecture, we show how adding a summary layer to the network makes it more amenable to analysis, and allows us to define the conditions that are required to guarantee that a set of samples will all be classified correctly. This process does not describe the generalisation behaviour of these networks, but produces a number of metrics that are useful for probing their learning and generalisation behaviour. We support the analytical conclusions with empirical results, both to confirm that the mathematical guarantees hold in practice, and to demonstrate the use of the analysis process.}, year = {2020}, journal = {South African Computer Journal}, volume = {32}, pages = {102-123}, issue = {2}, publisher = {South African Institute of Computer Scientists and Information Technologists}, issn = {1015-7999 (print); 2313-7835 (online)}, url = {http://hdl.handle.net/10394/36916}, doi = {10.18489/sacj.v32i2.861}, }
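The summary layer itself is specific to this paper, so the sketch below shows only a generic scaffold in a similar spirit: forward hooks that expose the hidden ReLU activations of a small feedforward network so that per-layer statistics can be computed. The architecture, the statistic printed, and the random inputs are assumptions for illustration, not the paper's construction.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 128), nn.ReLU(),
                      nn.Linear(128, 10))

activations = {}

def capture(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on every ReLU layer so hidden activations can be inspected.
for idx, layer in enumerate(model):
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(capture(f"relu_{idx}"))

x = torch.rand(32, 784)
logits = model(x)
for name, act in activations.items():
    # Fraction of active (non-zero) units: one simple per-layer summary statistic.
    print(name, "active fraction:", (act > 0).float().mean().item())

Any layer-wise analysis, such as the metrics and guarantees discussed in the abstract, would then operate on activations captured in this way.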