Memory Limitations of FPGA

What does your multi-layer perceptron do? ...I am not undermining FPGA, I am a big fan of FPGAs... but for database-related data, how do you store it in a normal FPGA?
 

sreevenkjan do you turn your brain on before you type or just bang on the keys and pray that something sensible comes out?
 

sreevenkjan do you turn your brain on before you type or just bang on the keys and pray that something sensible comes out?

@BlackHelicopter did you even read my messages before coming up with such inappropriate, dumb comments? You need better reasons than calling someone insensible... it makes you sound foolish. Btw, this is not the forum to fight.
 

What does your multi-layer perceptron do? ...I am not undermining FPGA, I am a big fan of FPGAs... but for database-related data, how do you store it in a normal FPGA?

Nothing fancy. Boringly simple classification on a small toy project. (*) And I promise you, I use exactly zero databases for that little project. Unless you want to get pedantic and count the subversion repo. Ah damnit, and the database of my wiki for project notes. Okay, so I used two databases in relation to that little project. Well, and maybe an authentication database, but no others, really.

(*) there's your tie-in for pointing out FPGA is not suitable for big projects (**).

(**) there's my tie-in to point out that any such pointing out would be silly. You will have to stream the data, just like with many other problems. And yes, memory bandwidth will definitely be a big-ass bottleneck. But so far the development time has been prohibitive, because I don't get paid for that. But if you were to throw a man-year at it in the form of several VHDL/Verilog bashing monkeys, I am reasonably confident that you can get good performance. Damn, right now I don't even have the budget to try and find out if I am wrong. XD
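
To put a rough number on that bandwidth point, here is a back-of-envelope sketch; the feature width, sample rate, and DDR figure below are made-up assumptions for illustration, not measurements from any real design:

```python
# Back-of-envelope check: can we stream feature vectors into an FPGA
# classifier faster than the memory interface can feed them?
# All numbers below are illustrative assumptions, not measured values.

FEATURES_PER_SAMPLE = 256          # assumed width of one input vector
BYTES_PER_FEATURE   = 4            # assumed 32-bit fixed-point/float features
SAMPLES_PER_SECOND  = 10_000_000   # assumed target classification rate

required_bw = FEATURES_PER_SAMPLE * BYTES_PER_FEATURE * SAMPLES_PER_SECOND

# Assumed effective bandwidth of a single DDR interface after overhead.
available_bw = 6.4e9  # bytes/s, illustrative

print(f"required : {required_bw / 1e9:.2f} GB/s")
print(f"available: {available_bw / 1e9:.2f} GB/s")
print("memory bandwidth is the bottleneck" if required_bw > available_bw
      else "compute is more likely the bottleneck")
```

With those made-up numbers the memory interface already cannot keep up, which is exactly the streaming bottleneck above.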

But for today I will do something simple and relaxing in the ongoing quest to reduce my analog n00bness. Like burning up a few TIP142s to try out a DIY PSU design.
 

Many ML algorithms can be trained on one (or a few) training samples at a time. The result is a set of parameters that can then be used to classify samples one (or a few) at a time. These don't have a large memory requirement. Nonparametric estimators (e.g., k-nearest neighbors) do have potentially large memory requirements.
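
As a minimal sketch of that difference (pure NumPy, randomly generated toy data, and a classic perceptron update rule; purely illustrative, not anyone's actual project code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10,000 samples, 16 features, labels in {-1, +1}.
X = rng.normal(size=(10_000, 16))
true_w = rng.normal(size=16)
y = np.sign(X @ true_w)

# --- Parametric, streamed: perceptron update, one sample at a time.
# Memory footprint is just the weight vector, regardless of dataset size.
w = np.zeros(16)
for xi, yi in zip(X, y):
    if yi * (w @ xi) <= 0:      # misclassified -> nudge the weights
        w += yi * xi

# --- Nonparametric: 1-nearest-neighbor must keep the whole training set.
def knn_predict(x, X_train, y_train):
    return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

print("perceptron stores", w.nbytes, "bytes of parameters")
print("1-NN stores      ", X.nbytes + y.nbytes, "bytes of training data")
```

The streamed learner only ever holds the weight vector, while the nearest-neighbor classifier has to keep every training sample around to classify anything.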

I'm assuming "database" is a mistranslation of a description of ML methods being "data based" or "data driven".
 
