Posts

Showing posts from June, 2018

Data Ethics Framework

Data ethics is an emerging branch of applied ethics which describes the value judgements and approaches we make when generating, analysing and disseminating data. This includes a sound knowledge of data protection law and other relevant legislation, and the appropriate use of new technologies. It requires a holistic approach incorporating good practice in computing techniques, ethics and information assurance.

The Data Ethics Framework consists of 3 parts:
- the data ethics principles
- additional guidance for each principle in the framework
- a workbook to help your team record the ethical decisions you’ve made about your project

The Data Ethics Framework principles

Your project, service or procured software should be assessed against the 7 data ethics principles.

1. Start with clear user need and public benefit

Using data in more innovative ways has the potential to transform how public services are delivered. We must always be clear about what we are trying to achieve…

SFIA 7 - The seventh major version of the Skills Framework for the Information Age

First published in 2000, SFIA has evolved through successive updates as a result of expert input by its global users to ensure that, first and foremost, it remains relevant and useful to the needs of the industry and business.

SFIA 7, as with previous updates, is an evolution. It has been updated in response to many change requests: many of the existing skills have been updated and a few additional ones introduced, but the key concepts and essential values of SFIA remain true, as they have done for nearly 20 years. The structure has remained the same: 7 levels of responsibility characterised by generic attributes, along with many professional skills and competencies described at one or more of those 7 levels.

The SFIA standard covers the full breadth of the skills and competencies related to information and communication technologies, digital transformation and software engineering. SFIA is also often applied to a range of other technical endeavours. As we…

Self-Imitation Learning (SIL)

This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive with the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks. Details >>
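The loss at the heart of SIL is compact enough to sketch. Below is a minimal PyTorch sketch of the self-imitation objective as the paper describes it: stored transitions contribute only when their observed return R exceeds the current value estimate V(s), so the agent imitates its own past good decisions. The function name, tensor shapes and the beta default are illustrative assumptions; in the full algorithm the samples come from a prioritized replay buffer and this loss is added on top of the usual A2C or PPO update.

```python
import torch
import torch.nn.functional as F

def sil_loss(policy_logits, values, actions, returns, beta=0.01):
    # Illustrative shapes (assumed): policy_logits [B, num_actions],
    # values [B], actions [B] (int64), returns [B] = discounted returns
    # of stored transitions.

    # Clipped advantage (R - V)_+ : zero whenever the past return did not
    # beat the current value estimate, so only "good" transitions count.
    advantage = torch.clamp(returns - values, min=0.0)

    # Policy term: negative log-likelihood of the stored actions,
    # weighted by the clipped advantage (treated as a constant).
    log_probs = F.log_softmax(policy_logits, dim=-1)
    action_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(action_log_probs * advantage.detach()).mean()

    # Value term: push V(s) up toward returns that exceeded it.
    value_loss = 0.5 * (advantage ** 2).mean()

    return policy_loss + beta * value_loss
```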

The Forrester Wave: Big Data Fabric, Q2 2018

Key Takeaways

Talend, Denodo Technologies, Oracle, IBM, And Paxata Lead The Pack
Forrester's research uncovered a market in which Talend, Denodo Technologies, Oracle, IBM, and Paxata are Leaders; Hortonworks, Cambridge Semantics, SAP, Trifacta, Cloudera, and Syncsort are Strong Performers; and Podium Data, TIBCO Software, Informatica, and Hitachi Vantara are Contenders.

EA Pros Are Looking To Support Multiple Use Cases With Big Data Fabric
The big data fabric market is growing because more EA pros see big data fabric as critical for their enterprise big data strategy.

Scale, Performance, AI/Machine Learning, And Use-Case Support Are Key Differentiators
The Leaders we identified support a broader set of use cases, offer enhanced AI and machine learning capabilities, and provide good scalability features.

Apache Hadoop 3.1 - a Giant Leap for Big Data

Use Cases

When we are in the outdoors, many of us often feel the need for a camera that is intelligent enough to follow us, adjust to the terrain heights and visually navigate through obstacles, while capturing panoramic videos. Here, I am talking about autonomous self-flying drones, very similar to cars on autopilot. The difference is that we are starting to see the proliferation of artificial intelligence into affordable, everyday use cases, compared to relatively expensive cars. These new use cases mean:

(1) They will need parallel compute processing to crunch through insane amounts of data (visual or otherwise) in real time for inference and training of deep learning neural network algorithms. This helps them distinguish between objects and get better with more data. Think of a leap in compute processing of about 100x, driven by the real-time nature of these use cases.

(2) They will need the deep learning software frameworks, so that data scientists & data engineer…