<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<channel>
		<title><![CDATA[Forum PipFlow - Neural Networks]]></title>
		<link>https://pipflow.com/forum/</link>
		<description><![CDATA[Forum PipFlow - https://pipflow.com/forum]]></description>
		<pubDate>Tue, 05 May 2026 02:10:13 +0000</pubDate>
		<generator>MyBB</generator>
		<item>
			<title><![CDATA[Machine learning with FPGA]]></title>
			<link>https://pipflow.com/forum/Thread-Machine-learning-with-FPGA</link>
			<pubDate>Tue, 10 Oct 2017 21:42:29 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://pipflow.com/forum/member.php?action=profile&uid=3">waldo</a>]]></dc:creator>
			<guid isPermaLink="false">https://pipflow.com/forum/Thread-Machine-learning-with-FPGA</guid>
			<description><![CDATA[Intel FPGA conference:<br />
<br />
  <a href="https://www.youtube.com/watch?v=3iCifD8gZ0Q" target="_blank" rel="noopener" class="mycode_url">https://www.youtube.com/watch?v=3iCifD8gZ0Q</a>]]></description>
			<content:encoded><![CDATA[Intel FPGA conference:<br />
<br />
  <a href="https://www.youtube.com/watch?v=3iCifD8gZ0Q" target="_blank" rel="noopener" class="mycode_url">https://www.youtube.com/watch?v=3iCifD8gZ0Q</a>]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[IT COULD BE VERYYY INTERESTING]]></title>
			<link>https://pipflow.com/forum/Thread-PODRIA-SER-MUYYY-INTERESANTE</link>
			<pubDate>Fri, 15 Sep 2017 20:54:22 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://pipflow.com/forum/member.php?action=profile&uid=359">jenrique42</a>]]></dc:creator>
			<guid isPermaLink="false">https://pipflow.com/forum/Thread-PODRIA-SER-MUYYY-INTERESANTE</guid>
			<description><![CDATA[A GREAT FIND, THIS PAGE I'VE COME ACROSS<br />
<br />
<a href="https://nips.cc/Conferences/2017" target="_blank" rel="noopener" class="mycode_url">https://nips.cc/Conferences/2017</a>]]></description>
			<content:encoded><![CDATA[A GREAT FIND, THIS PAGE I'VE COME ACROSS<br />
<br />
<a href="https://nips.cc/Conferences/2017" target="_blank" rel="noopener" class="mycode_url">https://nips.cc/Conferences/2017</a>]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Deep Neural Networks Ebook]]></title>
			<link>https://pipflow.com/forum/Thread-Deep-Neural-Networks-Ebook</link>
			<pubDate>Thu, 18 May 2017 20:13:40 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://pipflow.com/forum/member.php?action=profile&uid=3">waldo</a>]]></dc:creator>
			<guid isPermaLink="false">https://pipflow.com/forum/Thread-Deep-Neural-Networks-Ebook</guid>
			<description><![CDATA[Table of Contents<br />
Acknowledgements<br />
Notation<br />
<br />
1 Introduction<br />
Part I: Applied Math and Machine Learning Basics<br />
2 Linear Algebra<br />
3 Probability and Information Theory<br />
4 Numerical Computation<br />
5 Machine Learning Basics<br />
<br />
Part II: Modern Practical Deep Networks<br />
6 Deep Feedforward Networks<br />
7 Regularization for Deep Learning<br />
8 Optimization for Training Deep Models<br />
9 Convolutional Networks<br />
10 Sequence Modeling: Recurrent and Recursive Nets<br />
11 Practical Methodology<br />
12 Applications<br />
<br />
<br />
Part III: Deep Learning Research<br />
13 Linear Factor Models<br />
14 Autoencoders<br />
15 Representation Learning<br />
16 Structured Probabilistic Models for Deep Learning<br />
17 Monte Carlo Methods<br />
18 Confronting the Partition Function<br />
19 Approximate Inference<br />
20 Deep Generative Models<br />
<br />
Bibliography<br />
Index<br />
<br />
<a href="http://www.deeplearningbook.org" target="_blank" rel="noopener" class="mycode_url">http://www.deeplearningbook.org</a>]]></description>
			<content:encoded><![CDATA[Table of Contents<br />
Acknowledgements<br />
Notation<br />
<br />
1 Introduction<br />
Part I: Applied Math and Machine Learning Basics<br />
2 Linear Algebra<br />
3 Probability and Information Theory<br />
4 Numerical Computation<br />
5 Machine Learning Basics<br />
<br />
Part II: Modern Practical Deep Networks<br />
6 Deep Feedforward Networks<br />
7 Regularization for Deep Learning<br />
8 Optimization for Training Deep Models<br />
9 Convolutional Networks<br />
10 Sequence Modeling: Recurrent and Recursive Nets<br />
11 Practical Methodology<br />
12 Applications<br />
<br />
<br />
Part III: Deep Learning Research<br />
13 Linear Factor Models<br />
14 Autoencoders<br />
15 Representation Learning<br />
16 Structured Probabilistic Models for Deep Learning<br />
17 Monte Carlo Methods<br />
18 Confronting the Partition Function<br />
19 Approximate Inference<br />
20 Deep Generative Models<br />
<br />
Bibliography<br />
Index<br />
<br />
<a href="http://www.deeplearningbook.org" target="_blank" rel="noopener" class="mycode_url">http://www.deeplearningbook.org</a>]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Differentiable Neural Computers (Neural Networks with external memory)]]></title>
			<link>https://pipflow.com/forum/Thread-Differentiable-Neural-Computers-Neural-Networks-with-external-memory</link>
			<pubDate>Wed, 12 Apr 2017 06:52:02 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://pipflow.com/forum/member.php?action=profile&uid=3">waldo</a>]]></dc:creator>
			<guid isPermaLink="false">https://pipflow.com/forum/Thread-Differentiable-Neural-Computers-Neural-Networks-with-external-memory</guid>
			<description><![CDATA[Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.<br />
<br />
<a href="http://www.nature.com/articles/nature20101.epdf?author_access_token=ImTXBI8aWbYxYQ51Plys8NRgN0jAjWel9jnR3ZoTv0MggmpDmwljGswxVdeocYSurJ3hxupzWuRNeGvvXnoO8o4jTJcnAyhGuZzXJ1GEaD-Z7E6X_a9R-xqJ9TfJWBqz" target="_blank" rel="noopener" class="mycode_url">http://www.nature.com/articles/nature201...qJ9TfJWBqz</a>]]></description>
			<content:encoded><![CDATA[Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.<br />
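<br />
Not from the paper: a minimal NumPy sketch of the differentiable read/write mechanism the abstract describes (content-based addressing over an external memory matrix). All names, shapes and parameters here are chosen purely for illustration, not taken from the DNC itself.<br />
<pre>
import numpy as np

def content_weights(M, key, beta):
    # Content-based addressing: compare the key against every memory row
    # by cosine similarity, then sharpen into a softmax weighting.
    sim = (M @ key) / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sim)
    return w / w.sum()

def read(M, w):
    # Differentiable read: a weighted sum of memory rows, not a hard lookup.
    return w @ M

def write(M, w, erase, add):
    # Differentiable write: erase then add, each weighted per row.
    return M * (1.0 - np.outer(w, erase)) + np.outer(w, add)

# Toy usage (illustrative sizes): 4 memory slots of width 3.
M = np.random.randn(4, 3)
w = content_weights(M, key=np.array([1.0, 0.0, 0.0]), beta=5.0)
r = read(M, w)
M = write(M, w, erase=np.full(3, 0.5), add=np.array([0.1, 0.2, 0.3]))
</pre>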
<br />
<a href="http://www.nature.com/articles/nature20101.epdf?author_access_token=ImTXBI8aWbYxYQ51Plys8NRgN0jAjWel9jnR3ZoTv0MggmpDmwljGswxVdeocYSurJ3hxupzWuRNeGvvXnoO8o4jTJcnAyhGuZzXJ1GEaD-Z7E6X_a9R-xqJ9TfJWBqz" target="_blank" rel="noopener" class="mycode_url">http://www.nature.com/articles/nature201...qJ9TfJWBqz</a>]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Design and Implementation of Neural Network in FPGA Article]]></title>
			<link>https://pipflow.com/forum/Thread-Design-and-Implementation-of-Neural-Network-in-FPGA-Article</link>
			<pubDate>Tue, 11 Apr 2017 21:36:49 +0000</pubDate>
			<dc:creator><![CDATA[<a href="https://pipflow.com/forum/member.php?action=profile&uid=3">waldo</a>]]></dc:creator>
			<guid isPermaLink="false">https://pipflow.com/forum/Thread-Design-and-Implementation-of-Neural-Network-in-FPGA-Article</guid>
			<description><![CDATA[This paper constructs a fully parallel neural-network (NN) hardware architecture. An FPGA is used to reduce the neuron hardware by designing the activation function inside the neuron, without the lookup table used in most prior work, to realize an efficient NN. The work consists of two main parts: the first covers network training in MATLAB; the second covers the hardware implementation of the trained network using the Xilinx high-performance Virtex2 FPGA schematic entry design tools.<br />
<br />
<img src="https://pipflow.com/forum/images/attachtypes/pdf.png" title="Adobe Acrobat PDF" border="0" alt=".pdf" />
&nbsp;&nbsp;<a href="attachment.php?aid=4" target="_blank" title="">neural network FPGA.pdf</a> (Tamaño: 696.85 KB / Descargas: 8)
]]></description>
			<content:encoded><![CDATA[This paper constructs a fully parallel neural-network (NN) hardware architecture. An FPGA is used to reduce the neuron hardware by designing the activation function inside the neuron, without the lookup table used in most prior work, to realize an efficient NN. The work consists of two main parts: the first covers network training in MATLAB; the second covers the hardware implementation of the trained network using the Xilinx high-performance Virtex2 FPGA schematic entry design tools.<br />
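<br />
Not from the paper: a hedged Python sketch of one standard LUT-free activation, the PLAN piecewise-linear sigmoid, whose slopes and intercepts are powers of two so each hardware multiply reduces to a bit shift. The function the authors actually built into their neurons may differ.<br />
<pre>
import numpy as np

def sigmoid_plan(x):
    # PLAN-style piecewise-linear sigmoid approximation: all slopes and
    # intercepts are powers of two, so a multiply becomes a shift in hardware.
    a = np.abs(x)
    y = np.where(a >= 5.0, 1.0,
        np.where(a >= 2.375, 0.03125 * a + 0.84375,
        np.where(a >= 1.0, 0.125 * a + 0.625,
                 0.25 * a + 0.5)))
    return np.where(x >= 0.0, y, 1.0 - y)  # odd symmetry about (0, 0.5)

# Compare against the exact sigmoid on a test grid.
x = np.linspace(-8.0, 8.0, 1601)
err = np.abs(sigmoid_plan(x) - 1.0 / (1.0 + np.exp(-x)))
print(err.max())  # worst-case error stays on the order of 2e-2
</pre>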
<br />
<img src="https://pipflow.com/forum/images/attachtypes/pdf.png" title="Adobe Acrobat PDF" border="0" alt=".pdf" />
&nbsp;&nbsp;<a href="attachment.php?aid=4" target="_blank" title="">neural network FPGA.pdf</a> (Tamaño: 696.85 KB / Descargas: 8)
]]></content:encoded>
		</item>
	</channel>
</rss>