Recent Comments

Anonymous
3yr

I'm sure you have seen this interesting rift between programmers, one that goes beyond the friendly competition over who favors which IDE or which programming language has the nicer syntax: it extends to the very core of how we navigate the systems in front of us.

In essence, there are two kinds of people when it comes to computer navigation: those who rely on the mouse and can't understand why anyone would rather type text, and, on the other side, the few of us who have seen the light and prefer to use the keyboard as much as possible.

Anonymous
3yr

Deep neural networks (DNNs) are an indispensable machine learning tool for achieving human-level performance on many learning tasks. Yet, due to their black-box nature, it is inherently difficult to understand which aspects of the input data drive the decisions of the network. There are various real-world scenarios in which humans need to make actionable decisions based on the outputs of DNNs. Such decision support systems can be found in critical domains such as legislation, law enforcement, etc. It is important that the humans making high-level decisions can be sure that the DNN decisions are driven by combinations of data features that are appropriate in the context in which the decision support system is deployed, and that the decisions made are legally or ethically defensible.

Due to the incredible pace at which DNN technology is being developed, the development of new methods and studies on explaining the decision-making process of DNNs has blossomed into an active research field. A practitioner beginning to study explainable deep learning may be intimidated by the plethora of orthogonal directions the field is taking. This complexity is further exacerbated by the general confusion over what it means to explain the actions of a deep learning system and how to evaluate a system's "ability to explain".

To alleviate this problem, this article offers a "field guide" to deep learning explainability for those uninitiated in the field. The field guide: i) discusses the traits of a deep learning system that researchers enhance in explainability research, ii) places explainability in the context of other related deep learning research areas, and iii) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning. The guide is designed as an easy-to-digest starting point for those just entering the field.
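To make the "which input features drive the decision" question concrete, here is a minimal sketch of gradient saliency, one of the simplest attribution techniques in this space. It is not the paper's own method, just an illustration; the PyTorch model and the random input below are hypothetical placeholders standing in for any trained classifier.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for any trained classifier (weights here are random).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Placeholder input "image"; requires_grad so we can differentiate w.r.t. it.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

logits = model(x)
score = logits[0, logits.argmax()]  # logit of the predicted class
score.backward()                    # gradient of that score w.r.t. the input

# The gradient magnitude at each pixel marks the inputs that most
# influence this particular decision.
saliency = x.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28])
```

A heatmap of `saliency` is a crude but inspectable answer to the black-box problem; much of the field the guide surveys is about making such explanations more faithful and easier to evaluate.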

Anonymous
3yr

I couldn't reproduce this slowness.
