André Golliez, Cecile Aschwanden, Claudia Bretscher, Abraham Bernstein, Peter Farago, Sybil Krügel, Felix Frei, Bruno Bucher, Alessia Neuroni, Reinhard Riedl, Open Government Data Studie Schweiz, Berner Fachhochschule, Bern, 2012. (Book/Research Monograph)
|
|
Patrick Minder, Abraham Bernstein, CrowdLang: programming human computation systems, Version: 2, 2012-01-01. (Technical Report)
Today, human computation systems are mostly used for batch processing of large amounts of data in a variety of tasks (e.g., image labeling or optical character recognition), and the applications are often the result of lengthy trial-and-error refinement.
A plethora of tasks, however, cannot be captured in this paradigm, and as we move to more sophisticated problem solving, we will need to rethink the way in which we coordinate networked humans and computers involved in a task. What we lack is an approach to engineer solutions based on past successful patterns.
In this paper we present CrowdLang, a programming language and framework for engineering complex computation systems that combine large numbers of networked human and machine agents while drawing on a library of known successful interaction patterns.
We evaluate CrowdLang by programming a text translation task using a variety of known human-computation patterns. The evaluation shows that CrowdLang can explore a large design space of possible problem-solving programs simply by varying the abstractions used.
In an experiment involving 1,918 human actors, we furthermore show that a mixed human-machine translation significantly outperforms a pure machine translation in terms of adequacy and fluency while translating more than 30 pages per hour, and that the mixed translation approximates the human-translated gold standard to 75% under the automatic evaluation metric METEOR. Finally, our evaluation illustrates that a new human-computation pattern, which we call staged contest with pruning, outperforms all other refinements in the translation task. |
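The staged-contest-with-pruning pattern named in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not CrowdLang's actual syntax or API: the task functions below are hypothetical stand-ins for crowd tasks and machine services.

```python
import random

# Sketch of a staged contest with pruning. In a real crowd system each
# call below would dispatch work to human workers or a machine service.

def machine_translate(sentence):
    return f"[MT draft of: {sentence}]"                     # stand-in for an MT API

def human_improve(draft):
    return f"[improved v{random.randint(0, 999)}: {draft}]" # stand-in for a crowd task

def human_vote(candidates):
    return random.choice(candidates)                        # stand-in for crowd voting

def staged_contest_with_pruning(sentence, n_candidates=8, keep=2, stages=2):
    """Generate candidates, then alternate improve/vote stages,
    pruning the field after each vote until one winner remains."""
    candidates = [human_improve(machine_translate(sentence))
                  for _ in range(n_candidates)]
    for _ in range(stages):
        # Prune: keep only the top-voted candidates.
        survivors = []
        for _ in range(keep):
            winner = human_vote(candidates)
            candidates.remove(winner)
            survivors.append(winner)
        # Contest: let workers improve the survivors again.
        candidates = [human_improve(c) for c in survivors] + survivors
    return human_vote(candidates)

print(staged_contest_with_pruning("Der schnelle braune Fuchs."))
```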
|
Mei Wang, Abraham Bernstein, Marc Chesney, An experimental study on real options strategies, Quantitative Finance, Vol. 12 (11), 2012. (Journal Article)
We conduct a laboratory experiment to study whether people intuitively use real-option strategies in a dynamic investment setting. The participants were asked to play the role of an oil manager and make production decisions in response to a simulated mean-reverting oil price. Using cluster analysis, participants can be classified into four groups, which we label “mean-reverting,” “Brownian motion real-option,” “Brownian motion myopic real-option,” and “ambiguous.” We find two behavioral biases in our participants' strategies: ignoring the mean-reverting process, and myopic behavior. Both lead to overly frequent switching when compared with the theoretical benchmark. We also find that the last group behaved as if they had learned to incorporate the true underlying process into their decisions, and improved their decisions during the later stage. |
|
Amancio Bouza, Hypothesis-based collaborative filtering: retrieving like-minded individuals based on the comparison of hypothesized preferences, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Dissertation)
The vast product variety and product variation offered by online retailers provide an amazing number of choice options to individuals, posing a big challenge when it comes to finding and choosing the products which provide them the most utility. Consequently, consumers have to settle for products that provide them sufficient utility; beyond that, individuals tend to even defer product choice altogether. Recommender systems have emerged in the past years as an effective method to help individuals find interesting products. As a result, consumer welfare, i.e., consumers' total satisfaction, increased by an estimated $731 million to $1.03 billion in the year 2000 due to the increased product variety of online bookstores. This enhancement in consumer welfare is 7 to 10 times larger than the welfare gain from increased competition and lower prices in the book market. In other words, recommender systems are essential for increasing consumer welfare, which ultimately leads to an increase of economic and social welfare. Typically, recommender systems use the collective wisdom of individuals to expose individuals to the products which best fit their preferences, thus maximizing their utility. More precisely, the recommender system considers the product ratings of like-minded individuals to provide recommendations. Commonly, like-minded individuals are retrieved by comparing their ratings for commonly rated products, a filtering technology referred to as collaborative filtering. However, retrieving like-minded individuals based on their ratings for commonly rated products may be inappropriate, because those products are not necessarily a representative sample of the preferences of the two individuals being compared. There are four reasons. Firstly, the set of commonly rated products may be too sparse to draw a significant conclusion about the preference similarity of both individuals. Secondly, ratings for commonly rated products correspond to the intersection of two individuals' rated products and thus may represent both individuals' preferences only partially. Consequently, overall preference similarity is, in fact, deduced from partial preference similarity. Thirdly, the preference similarity between two individuals is not assessable when they share no ratings for the same products; like-minded individuals are thus missed for lack of ratings. Lastly, retailers collect only a fraction of individuals' ratings in their stores, because individuals purchase products from different stores. Hence, individuals' ratings are distributed across multiple retailers, which limits the set of commonly rated products per retailer.
In this dissertation, we propose hypothesis-based collaborative filtering (HCF) to expose individuals to the products that best fit their preferences. In HCF, like-minded individuals are retrieved based on the similarity of their respective hypothesized preferences, which machine learning algorithms infer from their product ratings. Machine learning extracts patterns to generalize from observations and is thus adequate for hypothesizing individuals' preferences. Generally, the similarity of two individuals' hypothesized preferences can be computed in two different ways. One way is to compare the hypothesized utilities that products provide to both individuals: we use both individuals' hypothesized preferences to predict the utilities of a set of products, and we propose three similarity metrics to compare the predicted product utilities. The other way is to analyze the composition of both individuals' hypothesized preferences. For this purpose, we introduce the notion of hypothesized partial preferences (HPPs), which are self-contained and form the components that constitute hypothesized preferences, and we propose several methods to compare HPPs in order to compute the similarity of two individuals' preferences. We conduct a large empirical study on a quasi-benchmark dataset and on variations of this dataset that differ in their degree of sparsity to evaluate the cold-start behavior of HCF. Based on this study, we provide empirical evidence for the robustness of HCF against data sparsity and for its superiority over state-of-the-art collaborative filtering methods. We use the research methodology of grounded theory to scrutinize the empirical results and explain the cold-start behavior of HCF in retrieving like-minded individuals relative to other collaborative filtering methods. Based on this theory, we show that HCF is more efficient in retrieving like-minded individuals from large sets of individuals and is more appropriate for individuals who provide few ratings. We verify the validity of the grounded theory by means of an empirical study. In conclusion, HCF provides individuals with better recommendations, particularly for individuals who provide few ratings and for rarely rated products, both of which complicate the retrieval of like-minded individuals. Hence, HCF increases consumer welfare, which ultimately leads to an increase of economic and social welfare. |
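The first similarity variant described above (comparing hypothesized utilities on a shared probe set of products) admits a compact sketch. The learner, product features, and data below are illustrative assumptions, not the dissertation's actual setup; note that the two users rate disjoint product sets, the case where classic rating-overlap collaborative filtering fails.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Sketch of hypothesis-based collaborative filtering (HCF), utility variant:
# learn each user's preferences as a model over product features, then
# compare users via the utilities their models predict for a shared probe
# set of products.

rng = np.random.default_rng(42)
n_features = 5
probe_products = rng.random((20, n_features))   # products both models can score

def hypothesize_preferences(rated_products, ratings):
    """Learn a user's preference hypothesis from her own ratings only."""
    model = DecisionTreeRegressor(max_depth=3, random_state=0)
    model.fit(rated_products, ratings)
    return model

# Two users who rated *disjoint* product sets: classic CF finds no overlap.
products_a, products_b = rng.random((30, n_features)), rng.random((25, n_features))
ratings_a = products_a[:, 0] * 4 + 1            # user A likes feature 0
ratings_b = products_b[:, 0] * 4 + 1            # user B has similar taste

h_a = hypothesize_preferences(products_a, ratings_a)
h_b = hypothesize_preferences(products_b, ratings_b)

# Preference similarity: correlation of the hypothesized probe utilities.
u_a, u_b = h_a.predict(probe_products), h_b.predict(probe_products)
print(f"hypothesized preference similarity: {np.corrcoef(u_a, u_b)[0, 1]:.2f}")
```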
|
Roman Studer, Temporal RDF processing system, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Bachelor's Thesis)
This thesis describes the concept, implementation, and evaluation of a temporal extension to the triple store RDFBox. The most important goal was to compare two new temporal index structures and to integrate them into RDFBox. Foremost attention has been paid to evaluating the concepts of those two indices and comparing their performance with each other rather than with the existing system. The first section gives an overview of the existing system, followed by an introduction to the changes the temporal extension brings with it. The third and most important part of the thesis presents the evaluation of the indices and possible enhancements. |
|
Manuel Gugger, Clustering high-dimensional sparse data, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2012. (Bachelor's Thesis)
This work takes a practical approach to evaluating clustering algorithms on different datasets in order to examine their behaviour on high-dimensional, sparse data. High dimensionality and sparsity pose high demands on the algorithms due to missing values and computational requirements, and it has been shown that algorithms perform significantly worse under such conditions. Here, approaches to circumvent these difficulties are analysed: distance matrices and recommender systems are examined to either reduce the complexity or impute missing data. A special focus is then put on the similarity between clustering solutions, with the goal of finding similar behaviour. The emphasis lies on obtaining flexible results instead of heavily tweaking certain algorithms, as the problem cannot be reduced solely to mathematical performance due to the missing values. Generally good and flexible results were achieved with a combination of content-based filtering and hierarchical clustering methods, or with the affinity propagation algorithm. Kernel-based clustering results differed considerably from those of other methods and were sensitive to changes in the input data. |
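The general recipe described here, fill in the missing entries first and then cluster, can be sketched as follows. The data, the mean-imputation step, and the parameter choices are illustrative assumptions; the thesis also considers recommender-style imputation and hierarchical clustering.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.impute import SimpleImputer

# Sketch: impute a sparse, high-dimensional matrix, then cluster it.
rng = np.random.default_rng(0)
X = rng.random((100, 50))
mask = rng.random(X.shape) < 0.7                 # ~70% of entries missing
X[mask] = np.nan

X_filled = SimpleImputer(strategy="mean").fit_transform(X)
labels = AffinityPropagation(damping=0.9, random_state=0).fit_predict(X_filled)
print(f"found {len(set(labels))} clusters")
```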
|
Abraham Bernstein, Mark Klein, Thomas W Malone, Programming the global brain, Communications of the ACM, Vol. 55 (5), 2012. (Journal Article)
|
|
Jayalath Ekanayake, Jonas Tappolet, Harald C Gall, Abraham Bernstein, Time variance and defect prediction in software projects, Empirical Software Engineering, Vol. 17 (4-5), 2012. (Journal Article)
It is crucial for a software manager to know whether or not one can rely on a bug prediction model. A wrong prediction of the number or the location of future bugs can lead to problems in the achievement of a project's goals. In this paper we first verify, both visually and statistically, the existence of variability in a bug prediction model's accuracy over time. Furthermore, we explore the reasons for such high variability over time, which includes periods of stability and variability of prediction quality, and formulate a decision procedure for evaluating prediction models before applying them. To exemplify our findings we use data from four open source projects and empirically identify various project features that influence the defect prediction quality. Specifically, we observed that a change in the number of authors editing a file and in the number of defects fixed by them influences the prediction quality. Finally, we introduce an approach to estimate the accuracy of prediction models that helps a project manager decide when to rely on a prediction model. Our findings suggest that one should be aware of the periods of stability and variability of prediction quality and should use approaches such as ours to assess a model's accuracy in advance. |
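The core idea, that a model's prediction quality varies over time and should be checked window by window before one relies on it, can be sketched as below. The features, the drifting synthetic data, and the learner are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Sketch: evaluate a defect predictor on successive time windows and
# watch how its accuracy varies before trusting its forecasts.
rng = np.random.default_rng(1)
n = 1200
X = rng.random((n, 4))                 # per-file features, ordered by time
drift = np.linspace(0, 1, n)           # the project evolves over time
y = (X[:, 0] + drift * X[:, 1] + rng.normal(0, 0.3, n) > 1.0).astype(int)

window = 200
aucs = []
for start in range(0, n - 2 * window, window):
    train = slice(start, start + window)
    test = slice(start + window, start + 2 * window)
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))

print("AUC per window:", np.round(aucs, 2))   # variability signals caution
```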
|
Patrick Minder, Abraham Bernstein, Social network aggregation using face-recognition, In: ISWC 2011 Workshop: Social Data on the Web, RWTH Aachen, Bonn, Germany, 2011-10-23. (Conference or Workshop Paper published in Proceedings)
With the rapid growth of the social web, an increasing number of people have started to replicate their off-line preferences and lives in an on-line environment. Consequently, the social web provides an enormous source of social network data, which can be used in both commercial and research applications. However, people often take part in multiple social network sites and generally share only a selected amount of data with the audience of a specific platform. The interlinkage of social graphs from different sources is therefore becoming increasingly important for applications such as social network analysis, personalization, or recommender systems. This paper proposes a novel method, based on face-recognition algorithms, to enhance available user re-identification systems for social network data aggregation. Furthermore, the method is combined with traditional text-based approaches in order to counterbalance the weaknesses of both methods. Using two samples of real-world social networks (with 1,610 and 1,690 identities, respectively) we show that, even though a pure face-recognition-based method is outperformed by the traditional text-based method (area under the ROC curve 0.986 vs. 0.938), the combined method significantly outperforms both of these (0.998, p = 0.0001), suggesting that the face-based method indeed carries information complementary to raw text attributes. |
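The combination step can be sketched as score fusion over candidate profile pairs, evaluated with the area under the ROC curve as in the abstract. The synthetic scores, labels, and the logistic-regression fusion are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch: fuse a face-similarity score and a text-attribute similarity
# score into one match score, then measure re-identification quality.
rng = np.random.default_rng(7)
n = 2000
is_same_person = rng.integers(0, 2, n)                    # ground truth
face_sim = np.clip(0.5 * is_same_person + rng.normal(0.3, 0.2, n), 0, 1)
text_sim = np.clip(0.6 * is_same_person + rng.normal(0.2, 0.15, n), 0, 1)

print("face only:", round(roc_auc_score(is_same_person, face_sim), 3))
print("text only:", round(roc_auc_score(is_same_person, text_sim), 3))

# Combine the two signals with a simple learned weighting (for brevity,
# trained and scored on the same pairs; a real study would hold out data).
pairs = np.c_[face_sim, text_sim]
fused = LogisticRegression().fit(pairs, is_same_person)
combined = fused.predict_proba(pairs)[:, 1]
print("combined: ", round(roc_auc_score(is_same_person, combined), 3))
```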
|
Iris Helming, Abraham Bernstein, Rolf Grütter, Stephan Vock, Making “close to” suitable for web search: A comparison of two approaches, In: Terra Cognita - Foundations, Technologies and Applications of the Geospatial Web, Bonn, Germany, 2011-10-23. (Conference or Workshop Paper published in Proceedings)
In this paper we compare two approaches to modelling the vague German spatial relation “in der Nähe von” (English: “close to”) to enable its usage in (semantic) web searches. A user wants, for example, to find all relevant documents regarding parks or forested landscapes close to a city. The problem is that there are no clear metric distance limits for possibly matching places, because the places are only restricted via the vague natural language expression, and since human perception does not work only in distances, we cannot handle such queries simply with metric distances. Our first approach models the meaning of these expressions in description logics using relations of the Region Connection Calculus; a formalism has been developed to find all instances that are potentially perceived as close to. The second approach builds on the idea that everything that can be reached in a reasonable amount of time with a given means of transport (e.g., car) is potentially perceived as close; it uses route calculations with a route planner. The first approach has already been evaluated; the second is still under development, but we can already show a correlation between what people consider close to and the time needed to get there. |
|
Christoph Kiefer, Abraham Bernstein, Application and evaluation of inductive reasoning methods for the semantic web and software analysis, In: Reasoning Web. Semantic Technologies for the Web of Data - 7th International Summer School 2011, Springer, 2011-08-23. (Conference or Workshop Paper published in Proceedings)
Exploiting the complex structure of relational data makes it possible to build better models by taking into account the additional information provided by the links between objects. We extend this idea to the Semantic Web by introducing our novel SPARQL-ML approach to perform data mining on Semantic Web data. Our approach is based on traditional SPARQL and on statistical relational learning methods such as Relational Probability Trees and Relational Bayesian Classifiers. We analyze our approach thoroughly by conducting four sets of experiments on synthetic as well as real-world data sets. Our analytical results show that our approach can be used on almost any Semantic Web data set to perform instance-based learning and classification. A comparison to kernel methods used in Support Vector Machines even shows that our approach is superior in terms of classification accuracy. |
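The gist of the approach, select instances and their features from RDF with SPARQL, then learn a classifier over the result set, can be approximated in plain Python. SPARQL-ML itself extends SPARQL with dedicated learning clauses and uses statistical relational learners; the toy vocabulary, data, and the decision tree below are assumptions for illustration.

```python
from rdflib import Graph, Literal, Namespace
from sklearn.tree import DecisionTreeClassifier

EX = Namespace("http://example.org/")
g = Graph()
# Toy RDF data: source files with a size, an author count, and a bug flag.
for i, (loc, authors, buggy) in enumerate([(120, 1, 0), (900, 5, 1),
                                           (300, 2, 0), (1500, 7, 1)]):
    f = EX[f"file{i}"]
    g.add((f, EX.loc, Literal(loc)))
    g.add((f, EX.authors, Literal(authors)))
    g.add((f, EX.buggy, Literal(buggy)))

# SPARQL selects the instances and features the learner will see.
rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?loc ?authors ?buggy
    WHERE { ?f ex:loc ?loc ; ex:authors ?authors ; ex:buggy ?buggy . }
""")
data = [(int(r.loc), int(r.authors), int(r.buggy)) for r in rows]
X, y = [d[:2] for d in data], [d[2] for d in data]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[1000, 6]]))        # classify an unseen instance
```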
|
Rolf Grütter, Iris Helming, Simon Speich, Abraham Bernstein, Rewriting queries for web searches that use local expressions, In: 5th International Symposium on Rules (RuleML 2011), Springer, Barcelona, Spain, 2011-07-19. (Conference or Workshop Paper published in Proceedings)
Users often enter a local expression to constrain a web search to a geographical place. Current search engines' capability to deal with expressions such as “close to” is, however, limited. This paper presents an approach that uses topological background knowledge to rewrite queries containing local expressions in a format better suited to standard search engines. To formalize local expressions, the Region Connection Calculus (RCC) is extended by additional relations, which are related to existing ones by means of composition rules. The approach is applied to web searches for communities in a part of Switzerland which are “close to” a reference place. Results show that query rewriting significantly improves recall of the searches. When dealing with approx. 30,000 role assertions, the time required to rewrite queries is in the range of a few seconds. Ways of dealing with a possible decrease of performance when operating on a larger knowledge base are discussed. |
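The rewriting step itself is simple once the background knowledge is in place: expand the vague constraint into an explicit disjunction of place names that any standard search engine can process. The relation table and place names below are illustrative assumptions; in the paper such assertions are derived from the extended RCC via composition rules.

```python
# Sketch: rewrite "close to <place>" into an OR over known nearby places,
# using precomputed topological background knowledge (toy assertions).
close_to = {
    "Zurich": ["Zollikon", "Kilchberg", "Opfikon"],
    "Bern": ["Ostermundigen", "Koeniz"],
}

def rewrite(query: str) -> str:
    """Replace a 'close to <place>' constraint with an explicit disjunction."""
    marker = "close to "
    if marker not in query:
        return query
    head, place = query.split(marker, 1)
    place = place.strip()
    disjunction = " OR ".join(f'"{p}"' for p in [place] + close_to.get(place, []))
    return f"{head}({disjunction})"

print(rewrite("parks close to Zurich"))
# -> parks ("Zurich" OR "Zollikon" OR "Kilchberg" OR "Opfikon")
```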
|
Katharina Reinecke, Patrick Minder, Abraham Bernstein, MOCCA - A system that learns and recommends visual preferences based on cultural similarity, In: 16th International Conference on Intelligent User Interfaces (IUI), ACM, Lisbon, Portugal, 2011-02-13. (Conference or Workshop Paper published in Proceedings)
We demonstrate our culturally adaptive system MOCCA, which is able to automatically adapt its visual appearance to the user's national culture. Rather than adapting to one nationality only, MOCCA takes into account a person's current and previous countries of residence and uses this information to calculate user-specific preferences. In addition, the system is able to learn new adaptation rules, and to refine existing ones, from users' manual modifications of the user interface, based on a collaborative filtering mechanism, and from observing the user's interaction with the interface. |
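A minimal sketch of the collaborative step might look as follows: weight other users' interface settings by the similarity of their cultural exposure. The exposure encoding, the "information density" setting, and the cosine weighting are assumptions for illustration, not MOCCA's actual rule base.

```python
import numpy as np

# Sketch: recommend a UI setting for a new user from culturally similar users.
# Rows: users; columns: years of exposure to cultures A, B, C (hypothetical).
exposure = np.array([[10.0, 2.0, 0.0],
                     [8.0, 3.0, 0.0],
                     [0.0, 1.0, 12.0]])
information_density = np.array([0.8, 0.7, 0.2])   # each user's chosen setting

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

new_user = np.array([9.0, 1.0, 1.0])
weights = np.array([cosine(new_user, row) for row in exposure])
prediction = weights @ information_density / weights.sum()
print(f"suggested information density: {prediction:.2f}")
```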
|
Markus Christen, Die Entstehung der Hirn-Computer-Analogie. Tücken und Fallstricke bei der Technisierung des Gehirns, In: Die Zukunft des menschlichen Gehirns : ethische und anthropologische Herausforderung der modernen Neurowissenschaften, Institut für Kirche und Gesellschaft, Schwerte, p. 135 - 154, 2011. (Book Chapter)
|
|
Ziwei Yang, Shen Gao, Jianliang Xu, Byron Choi, Authentication of range query results in MapReduce environments, In: Proceedings of the third international workshop on Cloud data management, ACM, New York, NY, USA, 2011. (Conference or Workshop Paper published in Proceedings)
|
|
Shen Gao, Jianliang Xu, Bingsheng He, Byron Choi, Haibo Hu, PCMLogging: reducing transaction logging overhead with PCM, In: 20th ACM international conference on Information and knowledge management, ACM, New York, NY, USA, 2011-01-01. (Conference or Workshop Paper published in Proceedings)
|
|
Jonas Tappolet, Managing Temporal Graph Data While Preserving Semantics, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Dissertation)
This thesis investigates the introduction of time as a first-class citizen in RDF-based knowledge bases as used by the Linked Data movement. By presenting EvoOnt, a use-case scenario from the field of software comprehension, we demonstrate a particular field that (1) benefits from the Semantic Web's tools and techniques, (2) has a high update rate, and (3) is a candidate dataset for Linked Data. EvoOnt is a set of OWL ontologies that cover three aspects of the software development process: a source code ontology that abstracts the elements of object-oriented code, a defect tracker ontology that models the contents of a defect database (a.k.a. bug tracker), and a version ontology that allows the expression of multiple versions of a source code file. In multiple experiments we demonstrate how Semantic Web tools and techniques can be used to perform common tasks known from software comprehension. Derived from this use case, we show how the temporal dimension can be leveraged in RDF data. Firstly, we present a representation format for the annotation of RDF triples with temporal validity intervals, proposing a special usage of named graphs to encode temporal triples. Secondly, we demonstrate how such a knowledge base can be queried using a temporal syntax extension of the SPARQL query language. Next, we present two index structures that speed up the processing and querying of temporally annotated data. Furthermore, we demonstrate how additional knowledge can be extracted from the temporal dimension by matching patterns that contain temporal constraints. Taken together, these elements outline a method that makes datasets published as Linked Data more robust to possible invalidations through updates of linked datasets, while processing and querying can be improved through sophisticated index structures and additional information can be derived from the history of a dataset. |
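The named-graph representation idea can be sketched with rdflib: put a triple into a named graph and attach the validity interval to that graph's IRI, then query across graphs. The thesis adds a dedicated temporal SPARQL syntax and index structures; the vocabulary (ex:validFrom/ex:validUntil) and data below are assumptions, and the query uses only standard SPARQL 1.1.

```python
from rdflib import Dataset, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")
ds = Dataset()

# A named graph serves as the temporal context of the triples it holds.
g = ds.graph(EX.ctx1)
g.add((EX.Parser_java, EX.hasMethod, EX.parse))

# Validity interval of everything in ex:ctx1, stored in the default graph.
ds.add((EX.ctx1, EX.validFrom, Literal("2010-01-01", datatype=XSD.date)))
ds.add((EX.ctx1, EX.validUntil, Literal("2010-06-30", datatype=XSD.date)))

results = ds.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?s ?from ?until WHERE {
        GRAPH ?g { ?s ex:hasMethod ex:parse }
        ?g ex:validFrom ?from ; ex:validUntil ?until .
    }
""")
for s, start, end in results:
    print(f"{s} valid from {start} until {end}")
```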
|
Raphael Ochsenbein, The influence of online trust across cultural borders: research project, 2011-01-01. (Other Publication)
This thesis describes the mediating role of online trust in people's behaviour on the Internet. Building on the current literature on the topic, the definition of online trust is extended in order to account for cultural influences on trust. After laying out the theoretical foundations, the implementation of a browser extension that could serve as an instrument to measure online trust is described. Finally, the literature used is reviewed and the limitations of the extension are discussed. |
|
Mengia Zollinger, OGD ZH - a prototype implementation, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Bachelor's Thesis)
This thesis describes the implementation of an OGD prototype for the city of Zurich. The main focus was the creation of a data catalogue and of several apps, for example for data visualization.
At the beginning, an overview of the procedure and of the framework used is given, followed by an explanation of the implementation and of the challenges encountered. The thesis ends with a comparison of the prototype with similar projects in other countries and with another framework.
It is shown that an OGD ZH is feasible, but that there are still unsolved issues such as the realization of version control, multilingualism, and the automatic generation and assignment of metadata. |
|
Thomas Scharrenbach, End-user assisted ontology evolution in uncertain domains, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2011. (Dissertation)
|
|