Computational Thinking for AI Research Publications

2019

1.
Daykin J, Groult R, Guesnet Y, et al. Efficient pattern matching in degenerate strings with the Burrows–Wheeler transform. Information Processing Letters. 2019;147. doi:10.1016/j.ipl.2019.03.003.

A degenerate or indeterminate string on an alphabet Σ is a sequence of non-empty subsets of Σ. Given a degenerate string t of length n and its Burrows–Wheeler transform, we present a new method for searching for a degenerate pattern of length m in t running in O(mn) time on a constant-size alphabet Σ. Furthermore, it is a hybrid pattern matching technique that works on both regular and degenerate strings. A degenerate string is said to be conservative if its number of non-solid letters is upper-bounded by a fixed positive constant q; in this case we show that the search time complexity is O(qm^2) for counting the number of occurrences and O(qm^2 + occ) for reporting the found occurrences, where occ is the number of occurrences of the pattern in t. Experimental results show that our method performs well in practice.

@article{265,
  author = {J.W. Daykin and R. Groult and Y. Guesnet and T. Lecroq and A. Lefebvre and M. Leonard and L. Mouchard and E. Prieur-Gaston and Bruce Watson},
  title = {Efficient pattern matching in degenerate strings with the Burrows–Wheeler transform},
  abstract = {A degenerate or indeterminate string on an alphabet Σ is a sequence of non-empty subsets of Σ. Given a degenerate string t of length n and its Burrows–Wheeler transform, we present a new method for searching for a degenerate pattern of length m in t running in O(mn) time on a constant-size alphabet Σ. Furthermore, it is a hybrid pattern matching technique that works on both regular and degenerate strings. A degenerate string is said to be conservative if its number of non-solid letters is upper-bounded by a fixed positive constant q; in this case we show that the search time complexity is O(qm^2) for counting the number of occurrences and O(qm^2 + occ) for reporting the found occurrences, where occ is the number of occurrences of the pattern in t. Experimental results show that our method performs well in practice.},
  year = {2019},
  journal = {Information Processing Letters},
  volume = {147},
  pages = {82 - 87},
  publisher = {Elsevier},
  doi = {10.1016/j.ipl.2019.03.003},
}
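
As an illustration of the problem setting only (the paper's actual method works from the Burrows–Wheeler transform of t), the following Python sketch gives the naive matcher for degenerate strings; the function name and example data are ours, not the paper's.

# Naive degenerate matcher: text and pattern are lists of non-empty symbol
# sets; positions match when the sets intersect. This brute-force check is
# O(nm) set operations, a baseline for the BWT-based search described above.
def degenerate_occurrences(text, pattern):
    n, m = len(text), len(pattern)
    return [j for j in range(n - m + 1)
            if all(text[j + i] & pattern[i] for i in range(m))]

# Example over the DNA alphabet; {'A', 'C'} is a non-solid (degenerate) letter.
text = [{'A'}, {'A', 'C'}, {'G'}, {'A'}, {'C'}, {'G'}]
pattern = [{'C'}, {'G'}]
print(degenerate_occurrences(text, pattern))  # [1, 4]
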
2.
Runge T, Schaefer I, Knüppel A, Cleophas L, Kourie D, Watson B. Tool Support for Confidentiality-by-Construction. Ada User Journal. 2019;38(2). doi:10.1145/3375408.3375413.

In many software applications, it is necessary to preserve confidentiality of information. Therefore, security mechanisms are needed to enforce that secret information does not leak to unauthorized users. However, most language-based techniques that enable information flow control work post-hoc, deciding whether a specific program violates a confidentiality policy. In contrast, we proposed in previous work a refinement-based approach to derive programs that preserve confidentiality-by-construction. This approach follows the principles of Dijkstra’s correctness-by-construction. In this extended abstract, we present the implementation and tool support of that refinement-based approach allowing to specify the information flow policies first and to create programs in a simple while language which comply to these policies by construction. In particular, we present the idea of confidentiality-by-construction using an example and discuss the IDE C-CorC supporting this development approach.

@article{263,
  author = {T. Runge and I. Schaefer and A. Knüppel and L.G.W.A. Cleophas and D.G. Kourie and Bruce Watson},
  title = {Tool Support for Confidentiality-by-Construction},
  abstract = {In many software applications, it is necessary to preserve confidentiality of information. Therefore, security mechanisms are needed to enforce that secret information does not leak to unauthorized users. However, most language-based techniques that enable information flow control work post-hoc, deciding whether a specific program violates a confidentiality policy. In contrast, we proposed in previous work a refinement-based approach to derive programs that preserve confidentiality-by-construction. This approach follows the principles of Dijkstra’s correctness-by-construction. In this extended abstract, we present the implementation and tool support of that refinement-based approach allowing to specify the information flow policies first and to create programs in a simple while language which comply to these policies by construction. In particular, we present the idea of confidentiality-by-construction using an example and discuss the IDE C-CorC supporting this development approach.},
  year = {2019},
  journal = {Ada User Journal},
  volume = {38},
  pages = {64 - 68},
  issue = {2},
  doi = {10.1145/3375408.3375413},
}
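
The contrast the abstract draws between post-hoc checking and construction-time guarantees can be made concrete with a small sketch. The labels and checker below are hypothetical and far simpler than C-CorC; they only illustrate the kind of policy being enforced, in the post-hoc style the paper argues against.

# A minimal post-hoc information-flow check: every variable carries a
# confidentiality label, and an assignment leaks when a 'high' source
# reaches a 'low' target. Labels and program representation are invented
# for illustration; C-CorC instead derives leak-free programs by refinement.
LABELS = {'secret': 'high', 'pin': 'high', 'display': 'low', 'log': 'low'}

def check_flows(assignments):
    for target, sources in assignments:  # (target, [source vars]) pairs
        if LABELS[target] == 'low' and any(LABELS[s] == 'high' for s in sources):
            return False, (target, sources)
    return True, None

print(check_flows([('log', ['display'])]))     # (True, None)
print(check_flows([('display', ['secret'])]))  # (False, ...): a high-to-low leak
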
3.
Runge T, Schaefer I, Cleophas L, Thüm T, Kourie D, Watson B. Tool Support for Correctness-by-Construction. In: European Joint Conferences on Theory and Practice of Software (ETAPS). Switzerland: Springer; 2019. doi:10.1007/978-3-030-16722-6_2.

Correctness-by-Construction (CbC) is an approach to incrementally create formally correct programs guided by pre- and postcondition specifications. A program is created using refinement rules that guarantee the resulting implementation is correct with respect to the specification. Although CbC is supposed to lead to code with a low defect rate, it is not prevalent, especially because appropriate tool support is missing. To promote CbC, we provide tool support for CbC-based program development. We present CorC, a graphical and textual IDE to create programs in a simple while-language following the CbC approach. Starting with a specification, our open source tool supports CbC developers in refining a program by a sequence of refinement steps and in verifying the correctness of these refinement steps using the theorem prover KeY. We evaluated the tool with a set of standard examples on CbC where we reveal errors in the provided specification. The evaluation shows that our tool reduces the verification time in comparison to post-hoc verification.

@inproceedings{262,
  author = {T. Runge and I. Schaefer and L.G.W.A. Cleophas and T. Thüm and D.G. Kourie and Bruce Watson},
  title = {Tool Support for Correctness-by-Construction},
  abstract = {Correctness-by-Construction (CbC) is an approach to incrementally create formally correct programs guided by pre- and postcondition specifications. A program is created using refinement rules that guarantee the resulting implementation is correct with respect to the specification. Although CbC is supposed to lead to code with a low defect rate, it is not prevalent, especially because appropriate tool support is missing. To promote CbC, we provide tool support for CbC-based program development. We present CorC, a graphical and textual IDE to create programs in a simple while-language following the CbC approach. Starting with a specification, our open source tool supports CbC developers in refining a program by a sequence of refinement steps and in verifying the correctness of these refinement steps using the theorem prover KeY. We evaluated the tool with a set of standard examples on CbC where we reveal errors in the provided specification. The evaluation shows that our tool reduces the verification time in comparison to post-hoc verification.},
  year = {2019},
  journal = {European Joint Conferences on Theory and Practice of Software (ETAPS)},
  pages = {25 - 42},
  month = {06/04 - 11/04},
  publisher = {Springer},
  address = {Switzerland},
  isbn = {978-3-030-16721-9},
  url = {https://link.springer.com/content/pdf/10.1007/978-3-030-16722-6.pdf},
  doi = {10.1007/978-3-030-16722-6_2},
}
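
Each CbC refinement rule discharges a proof obligation, which CorC verifies with the KeY prover. A rough feel for one such obligation, the assignment rule {P} x := e {Q}, can be had by exhaustively checking it over a tiny state space. The predicates, domain, and function names below are illustrative choices, not part of the tool.

from itertools import product

# Brute-force check of a Hoare triple {pre} x := assign(state) {post} over a
# small integer state space. Real CbC tooling proves this symbolically (KeY);
# exhaustive checking is only for intuition on toy contracts.
def check_assignment(pre, post, assign, domain=range(-5, 6)):
    for x, y in product(domain, repeat=2):
        state = {'x': x, 'y': y}
        if pre(state):
            state['x'] = assign(state)   # execute x := e
            if not post(state):
                return False, (x, y)     # counterexample state
    return True, None

# Example contract: from y >= 0, the assignment x := y*y establishes x >= y.
print(check_assignment(pre=lambda s: s['y'] >= 0,
                       post=lambda s: s['x'] >= s['y'],
                       assign=lambda s: s['y'] * s['y']))  # (True, None)
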
4.
Britz K, Varzinczak I. Preferential tableaux for contextual defeasible ALC. In: 28th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX). Springer LNAI no. 11714; 2019. https://www.springer.com/gp/book/9783030290252.

In recent work, we addressed an important limitation in previous extensions of description logics to represent defeasible knowledge, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects of the domain. Syntactically, this limitation translates to a context-agnostic notion of defeasible subsumption, which is quite restrictive when it comes to modelling different nuances of defeasibility. Our point of departure in our recent proposal allows for different orderings on the interpretation of roles. This yields a notion of contextual defeasible subsumption, where the context is informed by a role. In the present paper, we extend this work to also provide a proof-theoretic counterpart and associated results. We define a (naïve) tableau-based algorithm for checking preferential consistency of contextual defeasible knowledge bases, a central piece in the definition of other forms of contextual defeasible reasoning over ontologies, notably contextual rational closure.

@inproceedings{247,
  author = {Katarina Britz and Ivan Varzinczak},
  title = {Preferential tableaux for contextual defeasible ALC},
  abstract = {In recent work, we addressed an important limitation in previous extensions of description logics to represent defeasible knowledge, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects of the domain. Syntactically, this limitation translates to a context-agnostic notion of defeasible subsumption, which is quite restrictive when it comes to modelling different nuances of defeasibility. Our point of departure in our recent proposal allows for different orderings on the interpretation of roles. This yields a notion of contextual defeasible subsumption, where the context is informed by a role. In the present paper, we extend this work to also provide a proof-theoretic counterpart and associated results. We define a (naïve) tableau-based algorithm for checking preferential consistency of contextual defeasible knowledge bases, a central piece in the definition of other forms of contextual defeasible reasoning over ontologies, notably contextual rational closure.},
  year = {2019},
  journal = {28th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (TABLEAUX)},
  pages = {39-57},
  month = {03/09-05/09},
  publisher = {Springer LNAI no. 11714},
  isbn = {978-3-030-29026-9},
  url = {https://www.springer.com/gp/book/9783030290252},
}
5.
Britz K, Varzinczak I. Contextual rational closure for defeasible ALC. Annals of Mathematics and Artificial Intelligence. 2019;87(1-2). doi:10.1007/s10472-019-09658-2.

Description logics have been extended in a number of ways to support defeasible reasoning in the KLM tradition. Such features include preferential or rational defeasible concept inclusion, and defeasible roles in complex concept descriptions. Semantically, defeasible subsumption is obtained by means of a preference order on objects, while defeasible roles are obtained by adding a preference order to role interpretations. In this paper, we address an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects. We do this by inducing a modular preference order on objects from each modular preference order on roles, and using these to relativise defeasible subsumption. This yields a notion of contextualised rational defeasible subsumption, with contexts described by roles. We also provide a semantic construction for rational closure and a method for its computation, and present a correspondence result between the two.

@article{246,
  author = {Katarina Britz and Ivan Varzinczak},
  title = {Contextual rational closure for defeasible ALC},
  abstract = {Description logics have been extended in a number of ways to support defeasible reasoning in the KLM tradition. Such features include preferential or rational defeasible concept inclusion, and defeasible roles in complex concept descriptions. Semantically, defeasible subsumption is obtained by means of a preference order on objects, while defeasible roles are obtained by adding a preference order to role interpretations. In this paper, we address an important limitation in defeasible extensions of description logics, namely the restriction in the semantics of defeasible concept inclusion to a single preference order on objects. We do this by inducing a modular preference order on objects from each modular preference order on roles, and using these to relativise defeasible subsumption. This yields a notion of contextualised rational defeasible subsumption, with contexts described by roles. We also provide a semantic construction for rational closure and a method for its computation, and present a correspondence result between the two.},
  year = {2019},
  journal = {Annals of Mathematics and Artificial Intelligence},
  volume = {87},
  pages = {83-108},
  issue = {1-2},
  isbn = {1012-2443},
  url = {https://link.springer.com/article/10.1007/s10472-019-09658-2},
  doi = {10.1007/s10472-019-09658-2},
}
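
The preference-order semantics sketched in the abstract can be illustrated on a finite interpretation: a defeasible subsumption holds when the most preferred (most typical) objects satisfying the left-hand concept all satisfy the right-hand one. The single global order below is the pre-contextual picture this paper generalises; the model and names are invented for illustration.

# Defeasible subsumption over a finite interpretation with one preference
# order: C defeasibly implies D iff the minimal-rank C-objects are D-objects.
# The paper relativises such orders by roles; this single-order toy only
# illustrates the underlying idea.
def defeasibly_subsumed(C, D, ext, rank):
    c_objs = [o for o in ext if C in ext[o]]
    if not c_objs:
        return True
    lo = min(rank[o] for o in c_objs)
    return all(D in ext[o] for o in c_objs if rank[o] == lo)

ext = {'tweety': {'Bird', 'Flies'}, 'pingu': {'Bird', 'Penguin'}}
rank = {'tweety': 0, 'pingu': 1}   # tweety is the more typical bird

print(defeasibly_subsumed('Bird', 'Flies', ext, rank))     # True
print(defeasibly_subsumed('Penguin', 'Flies', ext, rank))  # False
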
6.
Britz K, Casini G, Meyer T, Varzinczak I. A KLM Perspective on Defeasible Reasoning for Description Logics. In: Description Logic, Theory Combination, and All That. Switzerland: Springer; 2019. doi:10.1007/978-3-030-22102-7_7.

In this paper we present an approach to defeasible reasoning for the description logic ALC. The results discussed here are based on work done by Kraus, Lehmann and Magidor (KLM) on defeasible conditionals in the propositional case. We consider versions of a preferential semantics for two forms of defeasible subsumption, and link these semantic constructions formally to KLM-style syntactic properties via representation results. In addition to showing that the semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. With the semantics of the defeasible version of ALC in place, we turn to the investigation of an appropriate form of defeasible entailment for this enriched version of ALC. This investigation includes an algorithm for the computation of a form of defeasible entailment known as rational closure in the propositional case. Importantly, the algorithm relies completely on classical entailment checks and shows that the computational complexity of reasoning over defeasible ontologies is no worse than that of the underlying classical ALC. Before concluding, we take a brief tour of some existing work on defeasible extensions of ALC that go beyond defeasible subsumption.

@inbook{240,
  author = {Katarina Britz and Giovanni Casini and Tommie Meyer and Ivan Varzinczak},
  title = {A KLM Perspective on Defeasible Reasoning for Description Logics},
  abstract = {In this paper we present an approach to defeasible reasoning for the description logic ALC. The results discussed here are based on work done by Kraus, Lehmann and Magidor (KLM) on defeasible conditionals in the propositional case. We consider versions of a preferential semantics for two forms of defeasible subsumption, and link these semantic constructions formally to KLM-style syntactic properties via representation results. In addition to showing that the semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in description logics. With the semantics of the defeasible version of ALC in place, we turn to the investigation of an appropriate form of defeasible entailment for this enriched version of ALC. This investigation includes an algorithm for the computation of a form of defeasible entailment known as rational closure in the propositional case. Importantly, the algorithm relies completely on classical entailment checks and shows that the computational complexity of reasoning over defeasible ontologies is no worse than that of the underlying classical ALC. Before concluding, we take a brief tour of some existing work on defeasible extensions of ALC that go beyond defeasible subsumption.},
  year = {2019},
  journal = {Description Logic, Theory Combination, and All That},
  pages = {147–173},
  publisher = {Springer},
  address = {Switzerland},
  isbn = {978-3-030-22101-0},
  url = {https://link.springer.com/book/10.1007%2F978-3-030-22102-7},
  doi = {10.1007/978-3-030-22102-7_7},
}
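
The rational closure algorithm mentioned above reduces defeasible entailment to classical entailment checks. The rank construction at its core can be sketched in the propositional KLM setting; the formula encoding and brute-force entailment below are illustrative simplifications, not the chapter's ALC procedure.

from itertools import product

ATOMS = ['bird', 'penguin', 'flies']

def entails(premises, conclusion):
    # Classical entailment by truth-table enumeration over ATOMS.
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def rank_conditionals(conditionals):
    # KLM-style ranking: rank 0 holds the non-exceptional conditionals,
    # higher ranks the increasingly exceptional ones. An antecedent is
    # exceptional when the materialisations entail its negation.
    layers, remaining = [], list(conditionals)
    while remaining:
        mat = [lambda v, a=a, c=c: (not a(v)) or c(v) for a, c in remaining]
        exc = [(a, c) for a, c in remaining
               if entails(mat, lambda v, a=a: not a(v))]
        if len(exc) == len(remaining):
            layers.append(remaining)   # totally exceptional residue
            break
        layers.append([p for p in remaining if p not in exc])
        remaining = exc
    return layers

defaults = [(lambda v: v['bird'],    lambda v: v['flies']),      # birds fly
            (lambda v: v['penguin'], lambda v: v['bird']),       # penguins are birds
            (lambda v: v['penguin'], lambda v: not v['flies'])]  # penguins don't fly
print([len(layer) for layer in rank_conditionals(defaults)])     # [1, 2]
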
7.
Du Toit T, Berndt J, Britz K, Fischer B. ConceptCloud 2.0: Visualisation and exploration of geolocation-rich semi-structured data sets. ICFCA 2019 Conference and Workshops. 2019. http://ceur-ws.org/Vol-2378/.

ConceptCloud is a flexible interactive tool for exploring, visualising, and analysing semi-structured data sets. It uses a combination of an intuitive tag cloud visualisation with an underlying concept lattice to provide a formal structure for navigation through a data set. ConceptCloud 2.0 extends the tool with an integrated map view to exploit the geolocation aspect of data. The tool’s implementation of exploratory search does not require prior knowledge of the structure of the data or compromise on scalability, and provides seamless navigation through the tag cloud and the map viewer.

@misc{227,
  author = {Tiaan Du Toit and Joshua Berndt and Katarina Britz and Bernd Fischer},
  title = {ConceptCloud 2.0: Visualisation and exploration of geolocation-rich semi-structured data sets},
  abstract = {ConceptCloud is a flexible interactive tool for exploring, visualising, and analysing semi-structured data sets. It uses a combination of an intuitive tag cloud visualisation with an underlying concept lattice to provide a formal structure for navigation through a data set. ConceptCloud 2.0 extends the tool with an integrated map view to exploit the geolocation aspect of data. The tool’s implementation of exploratory search does not require prior knowledge of the structure of the data or compromise on scalability, and provides seamless navigation through the tag cloud and the map viewer.},
  year = {2019},
  journal = {ICFCA 2019 Conference and Workshops},
  month = {06/2019},
  publisher = {CEUR-WS},
  isbn = {1613-0073},
  url = {http://ceur-ws.org/Vol-2378/},
}
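
The concept lattice underlying the tag-cloud navigation can be illustrated with a minimal formal concept analysis example. The context below (objects tagged with attributes, including geolocation-style tags) is made up, and the enumeration is the naive exponential one, nothing like the tool's scalable implementation.

from itertools import combinations

context = {'venue1': {'cafe', 'capetown'},        # object -> tag set
           'venue2': {'cafe', 'stellenbosch'},
           'venue3': {'museum', 'capetown'}}
all_attrs = sorted(set.union(*context.values()))

def extent(attrs):
    return frozenset(o for o, tags in context.items() if attrs <= tags)

def intent(objs):
    return frozenset(set.intersection(*(context[o] for o in objs))) if objs \
        else frozenset(all_attrs)

# Every formal concept is (extent(A), intent(extent(A))) for some attribute
# set A; enumerate all A and deduplicate.
concepts = {(e, intent(e)) for r in range(len(all_attrs) + 1)
            for subset in combinations(all_attrs, r)
            for e in [extent(set(subset))]}
for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), '<->', sorted(i))
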

2018

1.
Watson B. The impact of using a contract-driven, test-interceptor based software development approach. In: Annual conference of The South African Institute of Computer Scientists and Information Technologists (SAICSIT 2018). New York: ACM; 2018. https://doi.org/10.475/123_4.

A contract-driven development approach requires the formalization of component requirements in the form of a component contract. The Use Case, Responsibility Driven Analysis and Design (URDAD) methodology is based on the contract-driven development approach and uses contracts to capture user requirements and perform a technology-neutral design across layers of granularity. This is achieved by taking use-case based functional requirements through an iterative design process and generating various levels of granularity iteratively.

In this project, the component contracts that were captured by utilizing the URDAD approach are used to generate test interceptors which validate whether, in the context of rendering component services, the component contracts are satisfied. To achieve this, Java classes and interfaces are annotated with pre- and postconditions to represent the contracts in code. Annotation processors are then used to automatically generate test-interceptor classes by processing the annotations. The test-interceptor classes encapsulate test logic and are interface-compatible with their underlying component counterparts. This enables test-interceptors to intercept service requests to the underlying counterpart components in order to verify contract adherence. The generated test interceptors can be used for unit testing as well as real-time component monitoring. This development approach, utilized within the URDAD methodology, would then result in unit and integration tests across levels of granularity.

Empirical data from actual software development projects will be used to assess the impact of introducing such a development approach in real software development projects. In particular, the study assesses the impact on the quality attributes of the software development process, as well as the qualities of the software produced by the process. Process qualities measured include development productivity (the rate at which software is produced), correctness (the rate at which the produced software meets the clients' requirements) and the certifiability of the software development process (which certifiability requirements are fully or partially addressed by the URDAD development approach). Software qualities measured include reusability (empirical and qualitative), simplicity (the inverse of the complexity measure) and bug density (number of defects in a module).

The study aims to show conclusively how the approach impacts the creation of correct software which meets the client requirements, how productivity is affected, and whether the approach enhances or hinders certifiability. The study also aims to determine whether test-interceptors are a viable mechanism to produce high-quality tests that contribute to the creation of correct software. Furthermore, the study aims to determine whether the software produced by applying this approach yields improved reusability, whether the software becomes more or less complex, and whether more or fewer bugs are induced.

@inproceedings{211,
  author = {Bruce Watson},
  title = {The impact of using a contract-driven, test-interceptor based software development approach},
  abstract = {A contract-driven development approach requires the formalization of component requirements in the form of a component contract. The Use Case, Responsibility Driven Analysis and Design (URDAD) methodology is based on the contract-driven development approach and uses contracts to capture user requirements and perform a technology-neutral design across layers of granularity. This is achieved by taking use-case based functional requirements through an iterative design process and generating various levels of granularity iteratively.
In this project, the component contracts that were captured by utilizing the URDAD approach are used to generate test interceptors which validate whether, in the context of rendering component services, the component contracts are satisfied. To achieve this, Java classes and interfaces are annotated with pre- and postconditions to represent the contracts in code. Annotation processors are then used to automatically generate test-interceptor classes by processing the annotations. The test-interceptor classes encapsulate test logic and are interface-compatible with their underlying component counterparts. This enables test-interceptors to intercept service requests to the underlying counterpart components in order to verify contract adherence. The generated test interceptors can be used for unit testing as well as real-time component monitoring. This development approach, utilized within the URDAD methodology, would then result in unit and integration tests across levels of granularity.
Empirical data from actual software development projects will be used to assess the impact of introducing such a development approach in real software development projects. In particular, the study assesses the impact on the quality attributes of the software development process, as well as the qualities of the software produced by the process.
Process qualities measured include development productivity (the rate at which software is produced), correctness (the rate at which the produced software meets the clients' requirements) and the certifiability of the software development process (which certifiability requirements are fully or partially addressed by the URDAD development approach). Software qualities measured include reusability (empirical and qualitative), simplicity (the inverse of the complexity measure) and bug density (number of defects in a module).
The study aims to show conclusively how the approach impacts the creation of correct software which meets the client requirements, how productivity is affected, and whether the approach enhances or hinders certifiability. The study also aims to determine whether test-interceptors are a viable mechanism to produce high-quality tests that contribute to the creation of correct software. Furthermore, the study aims to determine whether the software produced by applying this approach yields improved reusability, whether the software becomes more or less complex, and whether more or fewer bugs are induced.},
  year = {2018},
  journal = {Annual conference of The South African Institute of Computer Scientists and Information Technologists (SAICSIT 2018)},
  pages = {322-326},
  month = {26/09-28/09},
  publisher = {ACM},
  address = {New York},
  isbn = {123-4567-24-567/08/06},
  url = {https://doi.org/10.475/123_4},
}
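
The interceptor idea, generating a wrapper that is interface-compatible with the component and checks its contract on every call, has a compact analogue in Python decorators. The decorator and example contract below are hypothetical, standing in for the generated Java test-interceptor classes described above.

import functools

def contract(pre, post):
    # Intercept calls: check the precondition on the arguments and the
    # postcondition on (arguments, result), i.e. runtime contract monitoring.
    def wrap(fn):
        @functools.wraps(fn)
        def interceptor(*args):
            assert pre(*args), f"precondition violated: {fn.__name__}{args}"
            result = fn(*args)
            assert post(*args, result), f"postcondition violated: {fn.__name__}"
            return result
        return interceptor
    return wrap

@contract(pre=lambda a, b: b != 0,
          post=lambda a, b, q: abs(a - q * b) < abs(b))
def int_div(a, b):
    return a // b

print(int_div(7, 2))  # 3; both contract checks pass
# int_div(7, 0) would fail the precondition before the component runs
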
2.
Watson B. Three Strategies for the Dead-Zone String Matching Algorithm. In: The Prague Stringology Conference. Prague, Czech Republic: Prague Stringology Club; 2018. http://www.stringology.org/.

No Abstract

@inproceedings{210,
  author = {Bruce Watson},
  title = {Three Strategies for the Dead-Zone String Matching Algorithm},
  abstract = {No Abstract},
  year = {2018},
  journal = {The Prague Stringology Conference},
  pages = {117-128},
  month = {27/08-28/08},
  publisher = {Prague Stringology Club},
  address = {Prague, Czech Republic},
  isbn = {978-80-01-06484-9},
  url = {http://www.stringology.org/},
}
3.
Watson B. Modelling the sensory space of varietal wines: Mining of large, unstructured text data and visualisation of style patterns. Scientific Reports. 2018;8(4987). https://www.nature.com/scientificreports.

No Abstract

@article{209,
  author = {Bruce Watson},
  title = {Modelling the sensory space of varietal wines: Mining of large, unstructured text data and visualisation of style patterns},
  abstract = {No Abstract},
  year = {2018},
  journal = {Scientific Reports},
  volume = {8},
  pages = {1-13},
  issue = {4987},
  publisher = {Springer Nature},
  url = {https://www.nature.com/scientificreports},
}
4.
Watson B. Using CSP to Develop Quality Concurrent Software. In: Principled Software Development. Switzerland: Springer; 2018. https://doi.org/10.1007/978-3-319-98047-8.

A method for developing concurrent software is advocated that centres on using CSP to specify the behaviour of the system. A small example problem is used to illustrate the method. The problem is to develop a simulation system that keeps track of and reports on the least unique bid of multiple streams of randomly generated incoming bids. The problem’s required high-level behaviour is specified in CSP, refined down to the level of interacting processes and then verified for refinement and behavioural correctness using the FDR refinement checker. Heuristics are used to map the CSP processes to a GO implementation. Interpretive reflections are offered of the lessons learned as a result of the exercise.

@inbook{208,
  author = {Bruce Watson},
  title = {Using CSP to Develop Quality Concurrent Software},
  abstract = {A method for developing concurrent software is advocated that centres on using CSP to specify the behaviour of the system. A small example problem is used to illustrate the method. The problem is to develop a simulation system that keeps track of and reports on the least unique bid of multiple streams of randomly generated incoming bids. The problem’s required high-level behaviour is specified in CSP, refined down to the level of interacting processes and then verified for refinement and behavioural correctness using the FDR refinement checker. Heuristics are used to map the CSP processes to a Go implementation. Interpretive reflections are offered of the lessons learned as a result of the exercise.},
  year = {2018},
  journal = {Principled Software Development},
  pages = {165-184},
  publisher = {Springer},
  address = {Switzerland},
  isbn = {978-3-319-98046-1},
  url = {https://doi.org/10.1007/978-3-319-98047-8},
}
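
The chapter's running example, tracking the least unique bid over several streams of random bids, maps naturally onto channel-based concurrency. The sketch below renders it with Python threads and a queue standing in for a CSP channel (the chapter itself refines CSP into Go); the process structure and constants are illustrative only.

import queue
import random
import threading
from collections import Counter

channel = queue.Queue()          # stands in for a CSP channel
N_BIDDERS, N_BIDS = 3, 100

def bidder(seed):
    # One bid stream: emit random bids, then a termination signal.
    rng = random.Random(seed)
    for _ in range(N_BIDS):
        channel.put(rng.randint(1, 20))
    channel.put(None)

def tracker():
    # Consume all streams, then report the least bid seen exactly once.
    counts, done = Counter(), 0
    while done < N_BIDDERS:
        bid = channel.get()
        if bid is None:
            done += 1
        else:
            counts[bid] += 1
    unique = sorted(b for b, c in counts.items() if c == 1)
    print("least unique bid:", unique[0] if unique else None)

threads = [threading.Thread(target=bidder, args=(s,)) for s in range(N_BIDDERS)]
threads.append(threading.Thread(target=tracker))
for t in threads:
    t.start()
for t in threads:
    t.join()
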
5.
Pretorius A, Kroon S, Kamper H. Learning Dynamics of Linear Denoising Autoencoders. In: 35th International Conference on Machine Learning. Sweden: Proceedings of Machine Learning Research (PMLR); 2018.

Denoising autoencoders (DAEs) have proven useful for unsupervised representation learning, but a thorough theoretical understanding is still lacking of how the input noise influences learning. Here we develop theory for how noise influences learning in DAEs. By focusing on linear DAEs, we are able to derive analytic expressions that exactly describe their learning dynamics. We verify our theoretical predictions with simulations as well as experiments on MNIST and CIFAR-10. The theory illustrates how, when tuned correctly, noise allows DAEs to ignore low variance directions in the inputs while learning to reconstruct them. Furthermore, in a comparison of the learning dynamics of DAEs to standard regularised autoencoders, we show that noise has a similar regularisation effect to weight decay, but with faster training dynamics. We also show that our theoretical predictions approximate learning dynamics on real-world data and qualitatively match observed dynamics in nonlinear DAEs.

@inproceedings{202,
  author = {Arnu Pretorius and Steve Kroon and H. Kamper},
  title = {Learning Dynamics of Linear Denoising Autoencoders},
  abstract = {Denoising autoencoders (DAEs) have proven useful for unsupervised representation learning, but a thorough theoretical understanding is still lacking of how the input noise influences learning. Here we develop theory for how noise influences learning in DAEs. By focusing on linear DAEs, we are able to derive analytic expressions that exactly describe their learning dynamics. We verify our theoretical predictions with simulations as well as experiments on MNIST and CIFAR-10. The theory illustrates how, when tuned correctly, noise allows DAEs to ignore low variance directions in the inputs while learning to reconstruct them. Furthermore, in a comparison of the learning dynamics of DAEs to standard regularised autoencoders, we show that noise has a similar regularisation effect to weight decay, but with faster training dynamics. We also show that our theoretical predictions approximate learning dynamics on real-world data and qualitatively match observed dynamics in nonlinear DAEs.},
  year = {2018},
  journal = {35th International Conference on Machine Learning},
  pages = {4141-4150},
  month = {10/07-15/07},
  publisher = {Proceedings of Machine Learning Research (PMLR)},
  address = {Sweden},
  isbn = {1938-7228},
}
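
The linear setting studied in the paper is easy to reproduce in simulation: a two-layer linear network trained by gradient descent to reconstruct clean inputs from noisy ones. The dimensions, noise level, and learning rate below are arbitrary illustrative choices, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)
n, d, h = 500, 10, 3                      # samples, input dim, hidden dim
X = rng.normal(size=(n, d)) * np.linspace(2.0, 0.2, d)  # varied input variances
eps, lr = 0.5, 0.05                       # input noise level, step size
W1 = rng.normal(scale=0.01, size=(d, h))  # linear encoder
W2 = rng.normal(scale=0.01, size=(h, d))  # linear decoder

for step in range(301):
    Xn = X + eps * rng.normal(size=X.shape)    # corrupt the input
    Z = Xn @ W1
    err = Z @ W2 - X                           # reconstruct the *clean* input
    loss = (err ** 2).mean()
    dW2 = Z.T @ err * (2 / err.size)           # gradients of mean squared error
    dW1 = Xn.T @ (err @ W2.T) * (2 / err.size)
    W1 -= lr * dW1
    W2 -= lr * dW2
    if step % 100 == 0:
        print(f"step {step:3d}  loss {loss:.4f}")
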
6.
Berglund M, Drewes F, van der Merwe B. The Output Size Problem for String-to-Tree Transducers. Journal of Automata, Languages and Combinatorics. 2018;23(1). https://www.jalc.de/issues/2018/issue_23_1-3/jalc-2018-019-038.php.

The output size problem, for a string-to-tree transducer, is to determine the asymptotic behavior of the function describing the maximum size of output trees, with respect to the length of input strings. We show that the problem to determine, for a given regular expression, the worst-case matching time of a backtracking regular expression matcher, can be reduced to the output size problem. The latter can, in turn, be solved by determining the degree of ambiguity of a non-deterministic finite automaton.

Keywords: string-to-tree transducers, output size, backtracking regular expression matchers, NFA ambiguity

@article{201,
  author = {Martin Berglund and F. Drewes and Brink van der Merwe},
  title = {The Output Size Problem for String-to-Tree Transducers},
  abstract = {The output size problem, for a string-to-tree transducer, is to determine the asymptotic behavior of the function describing the maximum size of output trees, with respect to the length of input strings. We show that the problem to determine, for a given regular expression, the worst-case matching time of a backtracking regular expression matcher, can be reduced to the output size problem. The latter can, in turn, be solved by determining the degree of ambiguity of a non-deterministic finite automaton. 
Keywords: string-to-tree transducers, output size, backtracking regular expression matchers, NFA ambiguity},
  year = {2018},
  journal = {Journal of Automata, Languages and Combinatorics},
  volume = {23},
  pages = {19-38},
  issue = {1},
  publisher = {Institut für Informatik, Justus-Liebig-Universität Giessen},
  address = {Germany},
  isbn = {2567-3785},
  url = {https://www.jalc.de/issues/2018/issue_23_1-3/jalc-2018-019-038.php},
}
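
The degree of ambiguity of an NFA, the quantity the output size problem is reduced to, counts accepting runs per input string. Counting runs is a simple dynamic program; the automaton below is a made-up example whose ambiguity grows linearly with input length, the kind of growth that ties to worst-case backtracking matching time.

def count_runs(delta, start, finals, word):
    # delta: (state, symbol) -> set of successors. Propagate, per state,
    # the number of distinct runs of the NFA reaching it.
    paths = {start: 1}
    for sym in word:
        nxt = {}
        for state, cnt in paths.items():
            for succ in delta.get((state, sym), ()):
                nxt[succ] = nxt.get(succ, 0) + cnt
        paths = nxt
    return sum(cnt for st, cnt in paths.items() if st in finals)

# On 'a' state 0 may loop or move on, so accepting runs grow with the input.
delta = {(0, 'a'): {0, 1}, (1, 'a'): {1}, (1, 'b'): {1}}
for w in ['a', 'aa', 'aaa', 'aab']:
    print(w, count_runs(delta, 0, {1}, w))  # 1, 2, 3, 2 accepting runs
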
7.
Berglund M, Drewes F, van der Merwe B. On Regular Expressions with Backreferences and Transducers. In: 10th Workshop on Non-Classical Models of Automata and Applications (NCMA 2018); 2018.

Modern regular expression matching software features many extensions, some general while some are very narrowly specified. Here we consider the generalization of adding a class of operators which can be described by, e.g., finite-state transducers. Combined with backreferences they enable new classes of languages to be matched. The addition of finite-state transducers is shown to make membership testing undecidable. Following this result, we study the complexity of membership testing for various restricted cases of the model.

@inproceedings{199,
  author = {Martin Berglund and F. Drewes and Brink van der Merwe},
  title = {On Regular Expressions with Backreferences and Transducers},
  abstract = {Modern regular expression matching software features many extensions, some general while some are very narrowly specified. Here we consider the generalization of adding a class of operators which can be described by, e.g., finite-state transducers. Combined with backreferences they enable new classes of languages to be matched. The addition of finite-state transducers is shown to make membership testing undecidable. Following this result, we study the complexity of membership testing for various restricted cases of the model.},
  year = {2018},
  journal = {10th Workshop on Non-Classical Models of Automata and Applications (NCMA 2018)},
  pages = {1-19},
  month = {21/08-22/08},
}
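
A one-line reminder of why backreferences already exceed the regular languages, before transducer-defined operators are even added: the pattern (a+)b\1 matches { a^n b a^n : n >= 1 }, which no finite automaton accepts. Python's standard re module suffices for the demonstration.

import re

pattern = re.compile(r'(a+)b\1')          # \1 must repeat the first a-block
for s in ['aba', 'aabaa', 'aabaaa']:
    print(s, bool(pattern.fullmatch(s)))  # True, True, False
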