Display everybody's notes in learning

Wednesday, June 19, 05:56PM  by:shuri
tags: deep, computer vision, cvpr 2018, deep learning, pattern recognition, machine learning

source CVPR18: Tutorial: Part 1: Visual Recognition and Beyond
Organizers: Kaiming He, Ross Girshick, Alex Kirillov, Georgia Gkioxari, Justin Johnson. Description: This tutorial covers topics at the frontier of research ...
Wednesday, June 19, 02:39PM  by:shuri

source Understanding Images in Vision Framework - WWDC 2019 - Videos - Apple Developer
Learn all about the many advances in the Vision Framework including effortless image classification, image saliency, determining image...
Tuesday, June 18, 08:12PM  by:shuri
tags: deep, deep learning, cvpr17, tutorial, pattern recognition, machine learning, computer vision

source Tutorial: Deep Learning for Objects and Scenes - Part 1
Learning Deep Representations for Visual Recognition, Kaiming He (Facebook AI Research); Deep Learning for Object Detection, Ross Girshick (Facebook AI Resear...
Tuesday, June 18, 08:03PM  by:shuri

source ModaNet: A Large-scale Street Fashion Dataset with Polygon Annotations
Searching for an ideal dress or pair of shoes sometimes could be challenging, especially when you do not know the best keywords to describe what you are looking for. Luckily, the emerging smart mobile devices provide an efficient and convenient way to capture those products of interest in your photo album. The next natural thing is letting an ecommerce app like eBay figure it out for you.
Tuesday, June 18, 08:00PM  by:shuri

source DeepFashion2: A Versatile Benchmark for Fashion Image Understanding
Even as fashion image analysis gets more traction from today’s image recognition researchers, understanding fashion images remains challenging for real-world applications due to large deformations…
Saturday, June 08, 12:43PM  by:shuri
tags: deep, simulation, seattle, robotics, robot, re:mars 2019, machine learning, amazon

source How Amazon’s delivery robots will navigate your sidewalk – TechCrunch
Earlier this year, Amazon announced its Scout sidewalk delivery robot. At the time, details were sparse, except for the fact that the company had started to make deliveries in a neighborhood in Washington State. Today, at Amazon’s re:Mars conference, I sat down with Sean Scott, the VP in charge of Scout, to talk about how […]
Sunday, May 26, 11:48AM  by:shuri

source Knowledge extraction from unstructured texts
Foreword: There is an unreasonable amount of information that can be extracted from what people publicly say on the internet. At Heuritech we use this information to better understand what people want, which products they like and why. This post explains, from a scientific point of view, what knowledge extraction is and details a few…
Monday, April 29, 04:07AM  by:shuri
tags: deep, projects, data science, resources, machine learning, learn python, guest post, ai

source Top 20 Python AI and Machine Learning Open Source Projects – Dataquest
Great data science programs give real-world practice. Use these open-sourced projects to get started with machine learning and artificial intelligence today.
Wednesday, March 27, 07:37PM  by:shuri
tags: deep, new york, artificial intelligence, awards, research, technology, decorations and honors, yoshua bengio, yann lecun, geoffrey e hinton, nyu, university of toronto, university of montreal, assn for computing machinery

source Turing Award Won by 3 Pioneers in Artificial Intelligence
For their work on neural networks, Geoffrey Hinton, Yann LeCun and Yoshua Bengio will share $1 million for what many consider the Nobel Prize of computing.
Thursday, March 07, 01:53PM  by:shuri
tags: deep, giphy api key, gifs for android, gifs for ios, gif apps, gif applications, gif api, animated gif api, giphy api, giphy developers
Wednesday, March 06, 08:02PM  by:shuri

source Launching TensorFlow Lite for Microcontrollers
I've been spending a lot of my time over the last year working on getting machine learning running on microcontrollers, and so it was great to finally start talking about it in public for the first time today at the TensorFlow Developer Summit. Even better, I was able to demonstrate TensorFlow Lite running on a Cortex…
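As a rough illustration of the conversion step the post is about (this is not the demo code from the talk; the toy Keras model and file name below are made up for the example), a trained model can be exported to a TensorFlow Lite flatbuffer that is small enough to embed on a Cortex-class microcontroller:

```python
# Minimal sketch, not the author's demo code: export a toy Keras model to a
# TensorFlow Lite flatbuffer for later embedding on a microcontroller.
import tensorflow as tf

# Stand-in model; a real deployment would use a model trained for the target
# task (e.g. keyword spotting).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization to shrink weights
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# On-device, the flatbuffer is typically turned into a C array
# (e.g. with `xxd -i model.tflite`) and run by the TF Lite Micro interpreter.
```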
Wednesday, February 27, 09:54PM  by:shuri

source Beyond Local Pattern Matching: Recent Advances in Machine Reading
Have you ever Googled some random question, such as how many countries are there in the world, and been impressed to see Google presenting the precise answer to you rather than just a list of links? This feature is clearly nifty and useful, but is also still limited; a search for a slightly more complex question such as how long do I need to bike to burn the calories in a Big Mac will not yield a nice answer, even though any person could look over the content of the first or second link and find the answer.
Thursday, January 24, 03:33PM  by:shuri

source Twitch
Twitch is the world's leading video platform and community for gamers.
Saturday, January 12, 10:36PM  by:shuri

source IBM teaches AI to debate humans by crowdsourcing arguments
IBM's AI wants to take on all comers in debates on every topic. But first, it's going to crowdsource its arguments from humans online and at CES 2019.
Friday, January 11, 05:24PM  by:shuri

source BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  1. ELMo (Peters et al., 2018) - Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL.
  2. Generative Pre-trained Transformer (OpenAI GPT) (Radford et al., 2018) - Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI.
  3. Transformer (Vaswani et al., 2017) - Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010.
  4. SQuAD question answering (Rajpurkar et al., 2016) - Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
  5. Masked language model - inspired by the Cloze task (Taylor, 1953).
  6. introduce a “next sentence prediction” task that jointly pre-trains text-pair representations.
  7. natural language inference (Bowman et al., 2015; Williams et al., 2018)
  8. There are two existing strategies for applying pre-trained language representations to downstream tasks: feature-based and fine-tuning.
  9. The feature-based approach, such as ELMo (Peters et al., 2018), uses task-specific architectures that include the pre-trained representations as additional features.
  10. The fine-tuning approach, such as the Generative Pre-trained Transformer (OpenAI GPT) (Radford et al., 2018),
  11. The major limitation is that standard language models are unidirectional,
  12. In addition to the masked language model, we also introduce a “next sentence prediction” task that jointly pre-trains text-pair representations.
  13. ELMo advances the state-of-the-art for several major NLP benchmarks (Peters et al., 2018) including question answering (Rajpurkar et al., 2016) on SQuAD, sentiment analysis (Socher et al., 2013), and named entity recognition (Tjong Kim Sang and De Meulder, 2003).
  14. A recent trend in transfer learning from language models (LMs) is to pre-train some model architecture on a LM objective before fine-tuning that same model for a supervised downstream task (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018).
  15. The advantage of these approaches is that few parameters need to be learned from scratch. At least partly due to this advantage, OpenAI GPT (Radford et al., 2018) achieved previously state-of-the-art results on many sentence-level tasks from the GLUE benchmark (Wang et al., 2018).
  16. transfer from supervised tasks with large datasets, such as natural language inference (Conneau et al., 2017) and machine translation (McCann et al., 2017).
  17. We use WordPiece embeddings (Wu et al., 2016) with a 30,000 token vocabulary
  18. denoising auto-encoders (Vincent et al., 2008)
  19. Adam with learning rate of 1e-4, β1 = 0.9, β2 = 0.999, L2 weight decay of 0.01, learning rate warmup over the first 10,000 steps, and linear decay of the learning rate.
  20. We use a dropout probability of 0.1 on all layers.
  21. We use a gelu activation (Hendrycks and Gimpel, 2016) rather than the standard relu,
  22. training loss is the sum of the mean masked LM likelihood and mean next sentence prediction likelihood (a rough code sketch of this combined objective follows this list).
  23. We also observed that large data sets (e.g., 100k+ labeled training examples) were far less sensitive to hyperparameter choice than small data sets. Fine-tuning is typically very fast, so it is reasonable to simply run an exhaustive search over the above parameters and choose the model that performs best on the development set.
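A minimal sketch of notes 19-22, written in PyTorch (the notes do not specify a framework, and the shapes, argument names, and helper functions below are illustrative assumptions, not the authors' implementation): the pre-training loss is the sum of the mean masked-LM loss and the mean next-sentence-prediction loss, optimized with Adam at the stated hyperparameters.

```python
# Hedged sketch of the BERT pre-training objective and optimizer settings from
# notes 19-22; PyTorch, shapes, and function names are assumptions.
import torch
import torch.nn.functional as F

def pretraining_loss(mlm_logits, mlm_labels, nsp_logits, nsp_labels):
    """Sum of mean masked-LM loss and mean next-sentence-prediction loss (note 22).

    mlm_logits: (batch, seq_len, vocab_size) token scores from the model.
    mlm_labels: (batch, seq_len) target ids; non-masked positions set to -100
                so only masked tokens contribute (note 5).
    nsp_logits: (batch, 2) scores for "is sentence B the real next sentence?" (note 6).
    nsp_labels: (batch,) 0/1 labels.
    """
    vocab_size = mlm_logits.size(-1)
    mlm_loss = F.cross_entropy(mlm_logits.reshape(-1, vocab_size),
                               mlm_labels.reshape(-1), ignore_index=-100)
    nsp_loss = F.cross_entropy(nsp_logits, nsp_labels)
    return mlm_loss + nsp_loss

def make_optimizer(model, total_steps, warmup_steps=10_000):
    """Adam with lr 1e-4, betas (0.9, 0.999), L2 weight decay 0.01 (note 19),
    linear warmup over the first 10,000 steps, then linear decay."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                                 betas=(0.9, 0.999), weight_decay=0.01)
    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return optimizer, torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```

The dropout of 0.1 on all layers and the gelu activation (notes 20-21) would sit inside the model definition itself rather than in the loss or optimizer code above.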
Friday, January 11, 03:22PM  by:shuri
tags: deep, overviews, language