Sequence to Sequence Learning
with Tensor2Tensor
Łukasz Kaiser and Ryan Sepassi
• Intro
• Basics
• Tensor view of neural networks
• TensorFlow core and higher-level APIs, Tensor2Tensor
• Exercise: understand T2T pipeline fully, try on MNIST
• Sequence models
• Basics
• Transformer
• Exercise: train basic sequence models, use Transformer
• Outlook: deep learning and Tensor2Tensor community
But Why?
(Tom Bianco, datanami.com)
Speed
TPUv2: 180 TF at $2/h
TPUv2 pod: 11.5 PF
TPUv3 pod: over 100 PF
Top supercomputer: 122 PF
(Double precision; could be over 1 exaflop for ML applications.)
ML Arxiv Papers per Year
~50 New ML papers every day!
Rapid accuracy improvements
(Image courtesy of Canziani et al., 2017; chart covers models from 2012 through 2017.)
Radically open culture
How Deep Learning Quietly Revolutionized NLP (2016)
What NLP tasks are we talking about?
● Part Of Speech Tagging Assign part-of-speech to each word.
● Parsing Create a grammar tree given a sentence.
● Named Entity Recognition Recognize people, places, etc. in a sentence.
● Language Modeling Generate natural sentences.
● Translation Translate a sentence into another language.
● Sentence Compression Remove words to summarize a sentence.
● Abstractive Summarization Summarize a paragraph in new words.
● Question Answering Answer a question, maybe given a passage.
● ….
Can deep learning solve these tasks?
● Inputs and outputs have variable size; how can neural networks handle that?
● Recurrent Neural Networks can do it, but how do we train them?
● Long Short-Term Memory [Hochreiter & Schmidhuber, 1997], but how to compose it?
● Encoder-Decoder (sequence-to-sequence) architectures
[Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014]
Parsing with sequence-to-sequence LSTMs
(1) Represent the tree as a sequence.
(2) Generate data and train a sequence-to-sequence LSTM model.
(3) Results: 92.8 F1 score vs 92.4 previous best [Vinyals & Kaiser et al., 2014]
Language modeling with LSTMs
Language model performance is measured in perplexity (lower is better).
● Kneser-Ney 5-gram: 67.6 [Chelba et al., 2013]
● RNN-1024 + 9-gram: 51.3 [Chelba et al., 2013]
● LSTM-512-512: 54.1 [Józefowicz et al., 2016]
● 2-layer LSTM-8192-1024: 30.6 [Józefowicz et al., 2016]
● 2-l.-LSTM-4096-1024+MoE: 28.0 [Shazeer & Mirhoseini et al., 2016]
Model size seems to be the decisive factor.
Language modeling with LSTMs: Examples
Raw (not hand-selected) sampled sentences: [Józefowicz et al., 2016]
About 800 people gathered at Hever Castle on Long Beach from noon to 2pm ,
three to four times that of the funeral cortege .
It is now known that coffee and cacao products can do no harm on the body .
Yuri Zhirkov was in attendance at the Stamford Bridge at the start of the second
half but neither Drogba nor Malouda was able to push on through the Barcelona
defence .
Sentence compression with LSTMs
Example:
Input: State Sen. Stewart Greenleaf discusses his proposed human
trafficking bill at Calvery Baptist Church in Willow Grove Thursday night.
Output: Stewart Greenleaf discusses his human trafficking bill.
Results (readability / informativeness):
MIRA (previous best): 4.31 / 3.55
LSTM [Filippova et al., 2015]: 4.51 / 3.78
Translation with LSTMs
Translation performance is measured in BLEU scores (higher is better; En→De):
● Phrase-Based MT: 20.7 [Durrani et al., 2014]
● Early LSTM model: 19.4 [Jean et al., 2015]
● DeepAtt (large LSTM): 20.6 [Zhou et al., 2016]
● GNMT (large LSTM): 24.9 [Wu et al., 2016]
● GNMT+MoE: 26.0 [Shazeer & Mirhoseini et al., 2016]
Again, model size and tuning seem to be the decisive factor.
Translation with LSTMs: Examples
German:
Probleme kann man niemals mit derselben Denkweise lösen, durch die sie
entstanden sind.
PBMT Translate: No problem can be solved from the same consciousness that they have arisen.
GNMT Translate: Problems can never be solved with the same way of thinking that caused them.
Translation with LSTMs: How good is it?
PBMT GNMT Human Relative improvement
English → Spanish 4.885 5.428 5.504 87%
English → French 4.932 5.295 5.496 64%
English → Chinese 4.035 4.594 4.987 58%
Spanish → English 4.872 5.187 5.372 63%
French → English 5.046 5.343 5.404 83%
Chinese → English 3.694 4.263 4.636 60%
Google Translate production data, median score by human evaluation on the scale 0-6. [Wu et al., ‘16]
That was 2016. Now.
Attention: Machine Translation Results
29.7 BLEU
Basics
Old School View
Convolutions
(Illustration from machinelearninguru.com)
Modern View
h = f(Wx + B)   [or h = conv(W, x)]
o = f(W'h + B')
l = -log p(o = true)
P -= lr * dl/dP, where P = {W, W', B, B'}
So what do we need?
h = f(Wx + B)
o = f(W'h + B')
l = -log p(o = true)
P -= lr * dl/dP
1. Operations like matmul and f, done fast
2. Gradients for dl/dP, computed symbolically
3. Specify W, W' and keep track of them
4. Run it at a large scale
See this online course for a nice introduction:
https://www.coursera.org/learn/machine-learning
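To make these four steps concrete, here is a minimal NumPy sketch (not the course's code; layer sizes and data are made up) of the forward pass, the loss, hand-written gradients, and the parameter update:

import numpy as np

np.random.seed(0)
x = np.random.randn(32, 784)              # a fake batch of 32 "images"
y = np.random.randint(0, 10, size=32)     # fake integer labels

W = np.random.randn(784, 128) * 0.01      # parameters P = {W, B, W2, B2}
B = np.zeros(128)
W2 = np.random.randn(128, 10) * 0.01
B2 = np.zeros(10)
lr = 0.1

for step in range(100):
    # Forward: h = f(Wx + B), o = softmax(W'h + B'), f = ReLU
    h = np.maximum(0, x @ W + B)
    logits = h @ W2 + B2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    l = -np.log(p[np.arange(32), y]).mean()           # l = -log p(o = true)

    # Backward: hand-derived dl/dP (this is what TensorFlow does symbolically)
    dlogits = p.copy()
    dlogits[np.arange(32), y] -= 1
    dlogits /= 32
    dW2 = h.T @ dlogits;  dB2 = dlogits.sum(0)
    dh = dlogits @ W2.T;  dh[h <= 0] = 0
    dW = x.T @ dh;        dB = dh.sum(0)

    # Update: P -= lr * dl/dP
    W -= lr * dW;  B -= lr * dB;  W2 -= lr * dW2;  B2 -= lr * dB2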
TensorFlow
Core TF Model
Yet another dataflow system
[Diagram: a graph of ops — MatMul, Add, Relu, Xent — fed by weights, biases, examples, and labels.]
Graph of Nodes, also called Operations or ops.
Yet another dataflow system, with tensors
[Same diagram: MatMul, Add, Relu, Xent with weights, biases, examples, labels.]
Edges are N-dimensional arrays: Tensors.
Yet another dataflow system, with state
[Diagram: Add, Mul, and a −= op wired to the biases variable and the learning rate.]
'Biases' is a variable; the −= op updates the biases. Some ops compute gradients.
Yet another dataflow system, distributed
[Same diagram, with the ops split across Device A and Device B.]
Devices: processes, machines, GPUs, etc.
What's not in the Core Model
● Anything about neural networks, machine learning, ...
● Anything about backpropagation, differentiation, ...
● Anything about gradient descent, parameter servers…
These are built by combining existing operations, or defining new operations.
The core system can be applied to problems other than machine learning.
Core TF API
API Families
Graph Construction
● Assemble a Graph of Operations.
Graph Execution
● Deploy and execute operations in a Graph.
Hello, world!
import tensorflow as tf
# Create an operation.
hello = tf.constant("Hello, world!")
# Create a session.
sess = tf.Session()
# Execute that operation and print its result.
print(sess.run(hello))
Graph Construction
Library of predefined Ops
● Constant, Variables, Math ops, etc.
Functions to add Ops for common needs
● Gradients: Add Ops to compute derivatives.
● Training methods: Add Ops to update variables (SGD, Adagrad, etc.)
All operations are added to a global Default Graph.
Slightly more advanced calls let you control the Graph more precisely.
Op that holds state that persists across calls to Run()
v = tf.get_variable('v', [4, 3])  # 4x3 matrix, float by default
[Diagram: a Variable op holding state, with value and reference outputs.]
Some Ops modify the Variable state: InitVariable, Assign, AssignSub, AssignAdd.
init = v.assign(tf.random_uniform(shape=v.shape))
Variables
[Diagram: an Assign op writes random parameters into the Variable's state.]
Assign updates the variable value when run; it also outputs the value for convenience.
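A small, self-contained TF 1.x sketch of these variable ops (the variable name and shapes are arbitrary):

import tensorflow as tf

v = tf.get_variable("v", [4, 3])                    # holds state across calls to Run()
init = v.assign(tf.random_uniform(shape=v.shape))   # Assign op: writes a new value
bump = v.assign_add(tf.ones_like(v))                # AssignAdd op: in-place update

sess = tf.Session()
sess.run(init)           # running the Assign op initializes v
print(sess.run(bump))    # runs AssignAdd and fetches the updated value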
Math Ops
A variety of Operations for linear algebra, convolutions, etc.
c = tf.constant(...)
w = tf.get_variable(...)
b = tf.get_variable(...)
y = tf.add(tf.matmul(c, w), b)
Overloaded Python operators help: y = tf.matmul(c, w) + b
[Diagram: c and w feed MatMul; its output and b feed Add.]
Operations, plenty of them
● Array ops
○ Concat
○ Slice
○ Reshape
○ ...
● Math ops
○ Linear algebra (MatMul, …)
○ Component-wise ops (Mul, ...)
○ Reduction ops (Sum, …)
● Neural network ops
○ Non-linearities (Relu, …)
○ Convolutions (Conv2D, …)
○ Pooling (AvgPool, …)
● ...and many more
○ Constants, Data flow, Control flow, Embedding, Initialization, I/O, Legacy Input Layers, Logging, Random, Sparse, State, Summary, etc.
Documentation at tensorflow.org
Graph Construction Helpers
● Gradients
● Optimizers
● Higher-Level APIs in core TF
● Higher-Level libraries outside core TF
Gradients
Given a loss, add Ops to compute gradients for Variables.
[Diagram: var0 and var1 feed many ops that produce the loss.]
Gradients
tf.gradients(loss, [var0, var1]) # Generate gradients
[Diagram: tf.gradients adds a mirrored chain of ops flowing back from the loss, producing gradients for var0 and var1.]
Example
Gradients for MatMul
[Diagram: for y = MatMul(x, w), the gradient graph uses Transpose and two MatMuls: gx = MatMul(gy, wᵀ) and gw = MatMul(xᵀ, gy).]
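A short sketch of tf.gradients on a toy loss (the shapes and the loss itself are made up for illustration):

import tensorflow as tf

x = tf.constant([[1.0, 2.0]])                 # 1x2 input
w = tf.get_variable("w", [2, 3])              # 2x3 parameter matrix
loss = tf.reduce_sum(tf.matmul(x, w) ** 2)    # toy scalar loss

grad_w, = tf.gradients(loss, [w])             # adds gradient ops to the graph

sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(grad_w))                       # a 2x3 tensor: dloss/dw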
Optimizers
Apply gradients to Variables: SGD(var, grad, learning_rate)
[Diagram: SGD multiplies grad by learning_rate (Mul) and applies the result to var with AssignSub.]
Note: learning_rate is just the output of an Op, so it can easily be decayed.
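A sketch of the same idea with a built-in optimizer and a decayed learning rate (the optimizer choice and decay schedule are arbitrary examples, not the slide's exact setup):

import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(        # learning_rate is just an Op output
    0.1, global_step, decay_steps=1000, decay_rate=0.96)

w = tf.get_variable("w", [10])
loss = tf.reduce_sum(tf.square(w - 1.0))

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)  # conceptually the Mul + AssignSub above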
Easily Add Optimizers
Builtin
● SGD, Adagrad, Momentum, Adam, …
Contributed
● LazyAdam, NAdam, YellowFin, Adafactor, ...
Putting it all together to train a Neural Net
Build a Graph by adding Operations:
● For Variables to hold the parameters of the Neural Net.
● To compute the Neural Net output: e.g. classification predictions.
● To compute a training loss: e.g. cross entropy, parameter L2 norms.
● To calculate gradients for the parameters to train.
● To apply gradients with a training function.
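The checklist above as one compact sketch for a tiny classifier (all sizes and names are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.int64, [None])

# Variables holding the parameters are created by the layers.
h = tf.layers.dense(x, 128, activation=tf.nn.relu, name="h1")
logits = tf.layers.dense(h, 10, name="output")       # classification predictions

# Training loss: cross entropy (a parameter L2 penalty could be added here too).
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y))

# Gradients and their application, via an optimizer.
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)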
Distributed Execution
Graph Execution
Session API
● API to deploy a Graph in a TensorFlow runtime
● Can run any subset of the graph
● Can add Ops to an existing Graph (e.g. for interactive use in a colab)
Training Utilities
● Checkpoint, Recovery, Summaries, Replicas, etc.
Local Runtime
[Diagram: the Python program creates the graph and a session; sess.run() executes ops in a local runtime with CPU and GPU devices.]
Remote Runtime
[Diagram: the Python program creates the graph and a session, then talks to a Master via CreateGraph(), Run([ops]), RunSubGraph(), and GetTensor(); the Master drives several Workers, each with CPU and/or GPU devices.]
Running and fetching output
# Run an Op and fetch its output.
# "values" is a numpy ndarray.
values = sess.run(<an op output>)
Running and fetching output
The transitive closure of needed ops is run.
Execution happens in parallel.
Feeding input, Running, and Fetching
[Diagram: a value is fed for a, and the output of an op is fetched.]
a_val = ...a numpy ndarray...
values = sess.run(<an op output>,
                  feed_dict={<a output>: a_val})
Feeding input, Running, and Fetching
Only the required Ops are run.
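A concrete feed-and-fetch sketch with a placeholder (names and values are illustrative):

import numpy as np
import tensorflow as tf

a = tf.placeholder(tf.float32, [None, 3])      # the "Feed" node
an_op = tf.reduce_sum(a * 2.0, axis=1)         # the op whose output we fetch

sess = tf.Session()
a_val = np.ones((4, 3), dtype=np.float32)
values = sess.run(an_op, feed_dict={a: a_val})  # only the required ops are run
print(values)                                   # -> [6. 6. 6. 6.]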
Higher-Level Core TF API
Layers are ops that create Variables
def embedding(x, vocab_size, dense_size,
name=None, reuse=None, multiplier=1.0):
"""Embed x of type int64 into dense vectors."""
with tf.variable_scope( # Use scopes like this.
name, default_name="emb", values=[x], reuse=reuse):
embedding_var = tf.get_variable(
"kernel", [vocab_size, dense_size])
return tf.gather(embedding_var, x)
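A usage sketch for the embedding layer above (vocab size, depth, and the token ids are toy values):

import tensorflow as tf

ids = tf.constant([[3, 7, 7, 1]], dtype=tf.int64)        # a batch of token ids
vectors = embedding(ids, vocab_size=100, dense_size=16)  # creates the "kernel" Variable

sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(vectors).shape)   # (1, 4, 16)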
Models are built from Layers
def bytenet(inputs, targets, hparams):
final_encoder = common_layers.residual_dilated_conv(
inputs, hparams.num_block_repeat, "SAME", "encoder", hparams)
shifted_targets = common_layers.shift_left(targets)
kernel = (hparams.kernel_height, hparams.kernel_width)
decoder_start = common_layers.conv_block(
tf.concat([final_encoder, shifted_targets], axis=3),
hparams.hidden_size, [((1, 1), kernel)], padding="LEFT")
return common_layers.residual_dilated_conv(
decoder_start, hparams.num_block_repeat,
"LEFT", "decoder", hparams)
Training Utilities
Training program typically runs multiple threads
● Execute the training op in a loop.
● Checkpoint every so often.
● Gather summaries for the Visualizer.
● Other, e.g. monitoring NaNs, costs, etc.
Estimator
So what do we need?
h = f(Wx + B)
o = f(W'h + B')
l = -log p(o = y)
P -= lr * dl/dP
1. Operations like matmul and f, done fast
2. Gradients for dl/dP, computed symbolically
3. Specify W, W' and keep track of them
4. Run it at a large scale
TensorFlow view:
h = tf.layers.dense(x, h_size, name="h1")
o = tf.layers.dense(h, num_classes, name="output")  # one logit per class
l = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=o, labels=y)
But data? Where do we get {x,y} from?
Tensor2Tensor
Tensor2Tensor (T2T) is a library of deep learning models and
datasets designed to accelerate deep learning research and
make it more accessible.
● Datasets: ImageNet, CIFAR, MNIST, Coco, WMT, LM1B, ...
● Models: ResNet, RevNet, ShakeShake, Xception, SliceNet,
Transformer, ByteNet, Neural GPU, LSTM, ...
● Tools: cloud training, hyperparameter tuning, TPU, ...
So what do we need?
h = f(Wx + B)
o = f(W'h + B')
l = -log p(o = true)
P -= lr * dl/dP
1. Operations like matmul and f, done fast
2. Gradients for dl/dP, computed symbolically
3. Specify W, W' and keep track of them
4. Run it at a large scale
TensorFlow: goo.gl/njJftZ
x, y = mnist.dataset
h = tf.layers.dense(x, h_size, name="h1")
o = tf.layers.dense(h, 10, name="output")  # 10 classes for MNIST
l = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=o, labels=y)
Play with the colab
goo.gl/njJftZ
● Try pure SGD instead of the Adam optimizer, and others like Adafactor
○ Find on tensorflow.org where the optimizer API is and how optimizers are called
○ Find the Adafactor paper on arXiv and read it; use it from Tensor2Tensor
● Try other layer sizes and numbers of layers, other activation functions.
● Try running a few times, how does initialization affect results?
● Try running on Cifar10, how does your model perform?
● Make a convolutional model, is it better? (tf.layers.dense -> tf.layers.conv2d)
● Try residual connections through conv layers, check out shake-shake in T2T
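For the convolutional-model exercise above, a minimal sketch of swapping dense layers for tf.layers.conv2d on image input (sizes are illustrative, not the colab's exact code):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])   # e.g. MNIST images
y = tf.placeholder(tf.int64, [None])

h = tf.layers.conv2d(x, filters=32, kernel_size=3, activation=tf.nn.relu)
h = tf.layers.max_pooling2d(h, pool_size=2, strides=2)
h = tf.layers.conv2d(h, filters=64, kernel_size=3, activation=tf.nn.relu)
h = tf.layers.flatten(h)
logits = tf.layers.dense(h, 10, name="output")

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)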
Sequence Models
RNNs Everywhere
Sequence to Sequence Learning with Neural Networks
Auto-Regressive CNNs
WaveNet and ByteNet
Transformer
Based on Attention Is All You Need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,
Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin and other works with Samy Bengio, Eugene Brevdo, Francois Chollet,
Stephan Gouws, Nal Kalchbrenner, Ofir Nachum, Aurko Roy, Ryan Sepassi.
Attention
[Diagram: convolution vs. attention.]
Dot-Product Attention
[Diagram: queries q0 and q1 attend over key-value pairs (k0, v0), (k1, v1), (k2, v2).]
Dot-Product Attention
def dot_product_attention(q, k, v, bias, dropout_rate=0.0, image_shapes=None, name=None,
make_image_summary=True, save_weights_to=None, dropout_broadcast_dims=None):
with tf.variable_scope(
name, default_name="dot_product_attention", values=[q, k, v]) as scope:
# [batch, num_heads, query_length, memory_length]
logits = tf.matmul(q, k, transpose_b=True)
if bias is not None:
logits += bias
weights = tf.nn.softmax(logits, name="attention_weights")
if save_weights_to is not None:
save_weights_to[scope.name] = weights
# dropping out the attention links for each of the heads
weights = common_layers.dropout_with_broadcast_dims(
weights, 1.0 - dropout_rate, broadcast_dims=dropout_broadcast_dims)
if expert_utils.should_generate_summaries() and make_image_summary:
attention_image_summary(weights, image_shapes)
return tf.matmul(weights, v)
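The core of the function above is softmax(QKᵀ)V; in the Transformer the logits are also scaled by 1/√d (in T2T that scaling is applied to the queries before this function is called). A dependency-free NumPy sketch:

import numpy as np

def dot_product_attention_np(q, k, v):
    """softmax(q @ k^T / sqrt(d)) @ v, for q:[n_q, d] and k, v:[n_kv, d]."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                     # attention scores
    weights = np.exp(logits - logits.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)         # softmax over memory positions
    return weights @ v

q = np.random.randn(2, 8)    # 2 queries of depth 8
k = np.random.randn(5, 8)    # 5 key-value pairs
v = np.random.randn(5, 8)
print(dot_product_attention_np(q, k, v).shape)   # (2, 8)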
                          Ops       Activations
Attention (dot-product)   n²·d      n² + n·d
Attention (additive)      n²·d      n²·d
Recurrent                 n·d²      n·d
Convolutional             n·d²      n·d
n = sequence length, d = depth, k = kernel size
What’s missing from Self-Attention?
Convolution Self-Attention
What’s missing from Self-Attention?
Convolution Self-Attention
● Convolution: a different linear transformation for each relative position.
Allows you to distinguish what information came from where.
● Self-Attention: a weighted average :(
The Fix: Multi-Head Attention
Convolution Multi-Head Attention
● Multiple attention layers (heads) in parallel (shown by different colors)
● Each head uses different linear transformations.
● Different heads can learn different relationships.
The Fix: Multi-Head Attention
                        Ops            Activations
Multi-Head Attention    n²·d + n·d²    n²·h + n·d
(with linear transformations; for each of the h heads, dq = dk = dv = d/h)
Recurrent               n·d²           n·d
Convolutional           n·d²           n·d
n = sequence length, d = depth, k = kernel size
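A minimal NumPy sketch of multi-head attention, reusing the dot_product_attention_np sketch above (head count and depths are toy values; real implementations batch the heads):

import numpy as np

def multi_head_attention_np(x, num_heads, wq, wk, wv, wo):
    """x:[n, d]; wq/wk/wv/wo:[d, d]; each head works on depth d/num_heads."""
    n, d = x.shape
    dh = d // num_heads
    q, k, v = x @ wq, x @ wk, x @ wv          # per-head linear transformations (sliced below)
    heads = []
    for i in range(num_heads):
        s = slice(i * dh, (i + 1) * dh)       # this head's slice of the depth
        heads.append(dot_product_attention_np(q[:, s], k[:, s], v[:, s]))
    return np.concatenate(heads, axis=-1) @ wo  # combine heads, project back to depth d

d, h, n = 16, 4, 6
x = np.random.randn(n, d)
params = [np.random.randn(d, d) * 0.1 for _ in range(4)]
print(multi_head_attention_np(x, h, *params).shape)   # (6, 16)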
Three ways of attention
● Encoder-Decoder Attention
● Encoder Self-Attention
● Masked Decoder Self-Attention
The Transformer
Machine Translation Results: WMT-14
29.1 BLEU (En→De), 41.8 BLEU (En→Fr)
Ablations
Coreference resolution (Winograd schemas)
● "The cow ate the hay because it was delicious."
  Google Translate: La vache mangeait le foin parce qu'elle était délicieuse.
  Transformer: La vache a mangé le foin parce qu'il était délicieux.
● "The cow ate the hay because it was hungry."
  Google Translate: La vache mangeait le foin parce qu'elle avait faim.
  Transformer: La vache mangeait le foin parce qu'elle avait faim.
● "The women stopped drinking the wines because they were carcinogenic."
  Google Translate: Les femmes ont cessé de boire les vins parce qu'ils étaient cancérogènes.
  Transformer: Les femmes ont cessé de boire les vins parce qu'ils étaient cancérigènes.
● "The women stopped drinking the wines because they were pregnant."
  Google Translate: Les femmes ont cessé de boire les vins parce qu'ils étaient enceintes.
  Transformer: Les femmes ont cessé de boire les vins parce qu'elles étaient enceintes.
● "The city councilmen refused the female demonstrators a permit because they advocated violence."
  Google Translate: Les conseillers municipaux ont refusé aux femmes manifestantes un permis parce qu'ils préconisaient la violence.
  Transformer: Le conseil municipal a refusé aux manifestantes un permis parce qu'elles prônaient la violence.
● "The city councilmen refused the female demonstrators a permit because they feared violence."
  Google Translate: Les conseillers municipaux ont refusé aux femmes manifestantes un permis parce qu'ils craignaient la violence.
  Transformer: Le conseil municipal a refusé aux manifestantes un permis parce qu'elles craignaient la violence.*
Long Text Generation
Generating entire Wikipedia
articles by summarizing top
search results and references.
(Memory-Compressed Attn.)
'''The Transformer''' are a Japanese [[hardcore punk]] band.
==Early years==
The band was formed in 1968, during the height of Japanese music
history. Among the legendary [[Japanese people|Japanese]] composers of
[Japanese lyrics], they prominently exemplified Motohiro Oda's
especially tasty lyrics and psychedelic intention. Michio was a
longtime member of the every Sunday night band PSM. His alluring was
of such importance as being the man who ignored the already successful
image and that he municipal makeup whose parents were&amp;nbsp;– the
band was called
Jenei.&lt;ref&gt;http://www.separatist.org/se_frontend/post-punk-musician-the-kidney.html&lt;/ref&gt;
From a young age the band was very close, thus opting to pioneer what
had actually begun as a more manageable core hardcore punk
band.&lt;ref&gt;http://www.talkradio.net/article/independent-music-fades-from-the-closed-drawings-out&lt;/ref&gt;
==History==
===Born from the heavy metal revolution===
In 1977 the self-proclaimed King of Tesponsors, [[Joe Lus:
: It was somewhere... it was just a guile ... taking this song to
Broadway. It was the first record I ever heard on A.M., After some
opposition I received at the hands of Parsons, and in the follow-up
notes myself.&lt;ref&gt;http://www.discogs.com/artist/The+Op%C5%8Dn+&amp;+Psalm&lt;/ref&gt;
The band cut their first record album titled ''Transformed, furthered
and extended Extended'',&lt;ref&gt;[https://www.discogs.com/album/69771
MC – Transformed EP (CDR) by The Moondrawn – EMI, 1994]&lt;/ref&gt;
and in 1978 the official band line-up of the three-piece pop-punk-rock
band TEEM. They generally played around [[Japan]], growing from the
Top 40 standard.
===1981-2010: The band to break away===
On 1 January 1981 bassist Michio Kono, and the members of the original
line-up emerged. Niji Fukune and his [[Head poet|Head]] band (now
guitarist) Kazuya Kouda left the band in the hands of the band at the
May 28, 1981, benefit season of [[Led Zeppelin]]'s Marmarin building.
In June 1987, Kono joined the band as a full-time drummer, playing a
few nights in a 4 or 5 hour stint with [[D-beat]]. Kono played through
the mid-1950s, at Shinlie, continued to play concerts with drummers in
Ibis, Cor, and a few at the Leo Somu Studio in Japan. In 1987, Kono
recruited new bassist Michio Kono and drummer Ayaka Kurobe as drummer
for band. Kono played trumpet with supplement music with Saint Etienne
as a drummer. Over the next few years Kono played as drummer and would
get many alumni news invitations to the bands' ''Toys Beach'' section.
In 1999 he joined the [[CT-182]].
His successor was Barrie Bell on a cover of [[Jethro Tull
(band)|Jethro Tull]]'s original 1967 hit &quot;Back Home&quot; (last
appearance was in Jethro), with whom he shares a name.
===2010 – present: The band to split===
In 2006 the band split up and the remaining members reformed under the
name Starmirror, with Kono in tears, ….
'''''The Transformer''''' is a [[book]] by British [[illuminatist]]
[[Herman Muirhead]], set in a post-apocalyptic world that border on a
mysterious alien known as the &quot;Transformer Planet&quot; which is
his trademark to save Earth. The book is about 25 years old, and it
contains forty-one different demographic models of the human race, as
in the cases of two fictional
''groups'',&amp;nbsp;''[[Robtobeau]]''&amp;nbsp;&quot;Richard&quot;
and &quot;The Transformers Planet&quot;.
== Summary ==
The book benefits on the [[3-D film|3-D film]], taking his one-third
of the world's pure &quot;answer&quot; and gas age from 30 to 70
within its confines.
The book covers the world of the world of [[Area 51|Binoculars]] from
around the worlds of Earth. It is judged by the ability of
[[telepathy|telepaths]] and [[television]], and provides color, line,
and end-to-end observational work.
To make the book up and document the recoverable quantum states of the
universe, in order to inspire a generation that fantasy producing a
tele-recording-offering machine is ideal. To make portions of this
universe home, he recreates the rostrum obstacle-oriented framework
Minou.&lt;ref&gt;http://www.rewunting.net/voir/BestatNew/2007/press/Story.html)&lt;/ref&gt;
== ''The Transformer''==
The book was the first on a [[Random Access Album|re-issue]] since its
original version of ''[[Robtobeau]]'', despite the band naming itself
a &quot;Transformer Planet&quot; in the book.&lt;ref
name=prweb-the-1985&gt;{{cite
web|url=http://www.prnewswire.co.uk/cgi/news/release?id=9010884|title=''The
Transformer''|publisher=www.prnewswire.co.uk|date=|accessdate=2012-04-25}}&lt;/ref&gt;
Today, &quot;[[The Transformers Planet]]&quot; is played entirely
open-ended, there are more than just the four previously separate only
bands. A number of its groups will live on one abandoned volcano in
North America,
===Conceptual ''The Transformer'' universe===
Principals a setting-man named “The Supercongo Planet,” who is a
naturalistic device transferring voice and humour from ''The
Transformer Planet,'' whose two vice-maks appear often in this
universe existence, and what the project in general are trying to
highlight many societal institutions. Because of the way that the
corporation has made it, loneliness, confidence, research and renting
out these universes are difficult to organise without the bands
creating their own universe. The scientist is none other than a singer
and musician. Power plants are not only problematic, but if they want
programmed them to create and perform the world's first Broadcast of
itself once the universe started, but deliberately Acta Biological
Station, db.us and BB on ''The Transformer Planet'', ''The Transformer
Planet'', aren't other things Scheduled for.
:&lt;blockquote&gt;A man called Dick Latanii Bartow, known the
greatest radio dot Wonderland administrator at influential arrangers
in a craze over the complex World of Biological Predacial Engineer in
Rodel bringing Earth into a 'sortjob' with fans. During this
'Socpurportedly Human', Conspiracy was being released to the world as
Baron Maadia on planet Nature. A world-renowned scientist named Julia
Samur is able to cosmouncish society and run for it - except us who is
he and he is before talking this entire T100 before Cell physiologist
Cygnets. Also, the hypnotic Mr. Mattei arrived, so it is Mischief who
over-manages for himself - but a rising duplicate of Phil Rideout
makes it almost affable. There is plenty of people at work to make
use of it and animal allies out of politics. But Someday in 1964, when
we were around, we were steadfast against the one man's machine and he
did an amazing job at the toe of the mysterious...
Mr. Suki who is an engineering desk lecturer at the University of}}}}
…………….
Image Generation
Model Type                            % unrecognized (max = 50%)
ResNet                                4.0%
Superresolution GAN (Garcia ’16)      8.5%
PixelRecursive (Dahl et al., 2017)    11%
Image Transformer                     36.9%
How about GANs?
(Are GANs Created Equal? A Large-Scale Study)
Problem 1: Variance
Problem 2: Even the best models are not great:
Image Transformer: 36.6
Play with the colab
goo.gl/njJftZ
● Try a pre-trained Transformer on translation, see attentions.
● See https://jalammar.github.io/illustrated-transformer/
● Add Transformer layer on the previous sequence tasks, try it.
● Try the non-deterministic sequence task: 50% copy / 50% repeat-even:
○ See that the previous sequence model fails on ambiguous outputs
○ Add an auto-regressive part and attention
○ See that the new model is 50% correct (best possible)
○ *Does it generalize less with attention? Why? What could be done?
How do I get it?
Tensor2Tensor
Tensor2Tensor (T2T) is a library of deep learning models
and datasets designed to make deep learning more
accessible and accelerate ML research.
● Datasets: ImageNet, CIFAR, MNIST, Coco, WMT, LM1B,
...
● Models: ResNet, RevNet, ShakeShake, Xception, SliceNet,
Transformer, ByteNet, Neural GPU, LSTM, ...
Tensor2Tensor Cutting Edge
Tensor2Tensor Code (github)
● data_generators/ : datasets, must subclass Problem
● models/ : models, must subclass T2TModel
● utils/ , bin/ , etc. : utilities, binaries, cloud helpers, …
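For the data_generators/ bullet above, a sketch of what subclassing Problem looks like, following the text-to-text interface described in the T2T docs (the class name and example pairs are made up; check the current T2T documentation for exact signatures):

from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry


@registry.register_problem
class TranslateMyData(text_problems.Text2TextProblem):
  """A toy text-to-text problem; t2t-trainer can then use --problems=translate_my_data."""

  @property
  def approx_vocab_size(self):
    return 2**13  # ~8k subwords

  @property
  def is_generate_per_split(self):
    return False  # generate once and let T2T split train/dev

  def generate_samples(self, data_dir, tmp_dir, dataset_split):
    # Yield input/target pairs; here just a hypothetical in-memory list.
    for inp, tgt in [("hello", "bonjour"), ("thank you", "merci")]:
      yield {"inputs": inp, "targets": tgt}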
pip install tensor2tensor && t2t-trainer \
  --generate_data --data_dir=~/t2t_data --output_dir=~/t2t_train/mnist \
  --problems=image_mnist --model=shake_shake --hparams_set=shake_shake_quick \
  --train_steps=1000 --eval_steps=100
Tensor2Tensor Applications
pip install tensor2tensor && t2t-trainer \
  --generate_data --data_dir=~/t2t_data --output_dir=~/t2t_train/dir \
  --problems=$P --model=$M --hparams_set=$H
● Translation (state-of-the-art both on speed and accuracy):
$P=translate_ende_wmt32k, $M=transformer, $H=transformer_big
● Image classification (CIFAR, also ImageNet):
$P=image_cifar10, $M=shake_shake, $H=shakeshake_big
● Summarization (CNN):
$P=summarize_cnn_dailymail32k, $M=transformer, $H=transformer_prepend
● Speech recognition (Librispeech):
$P=librispeech, $M=transformer, $H=transformer_librispeech
Why Tensor2Tensor?
● No need to reinvent ML. Best practices and SOTA models.
● Modularity helps. Easy to change models, hparams, data.
● Trains everywhere. Multi-GPU, distributed, Cloud, TPUs.
● Used by Google Brain. Papers, preferred for Cloud TPU LMs.
● Great active community! Find us on github, gitter, groups, ...
Tensor2Tensor + CloudML
How do I train a model on my data?
See the Cloud ML poetry tutorial!
● How to hook up your data to the library of models.
● How to easily run on Cloud ML and use all features.
● How to tune the configuration of a model automatically.
Result: Even with 20K data examples it generates poetry!
