Looks primarily at changing object shapes, introducing the move tool and the 2-point arc tool. Double-clicking to repeat the push/pull tool also proved convenient. We then used the move tool to alter slopes of surfaces, including using the up key to match the slope and then the height of another surface.
Next up is the arc tool, which has 4 variants:
Arc – the main point of this method determines where the center point of the arc will be
2 Point Arc – select two points that will be the width of the arc
3 Point Arc – the first two points determine form, and the third point gives the exact length; ideal for irregularly shaped objects
Pie – works like Arc but closes the shape into a filled wedge
Again the first pass took a while and was quite difficult, but a complete redraw took only 5 mins. When drawing structures like this, with eaves and sloped roofs, it is important to complete a room (minus the eaves and roof thickness) first to make slope matching easier.
# Note that this executes arbitrary code from the vim-plug and vim-go repos
curl -fLo ~/.vim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
git clone https://github.com/fatih/vim-go.git ~/.vim/plugged/vim-go
Customise ~/.vimrc to enable and configure your plugins and shortcut keys
Once the ~/.vimrc is in place, run :GoInstallBinaries to fetch vim-go’s dependencies
Shortcut keys in this vimrc:
\ + b -> build
if errors occur, step forward and back through them with ctrl + n and ctrl + m
close quick fix dialogue boxes with \ + a
\ + i -> install
dif (whilst on a func definition, deletes all contents of the func)
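A sketch of a ~/.vimrc covering the shortcuts above — the <Plug> names are vim-go’s documented mappings, <Leader> defaults to \, and the plugin paths assume the vim-plug layout from the clone commands earlier:

```vim
" Load plugins with vim-plug
call plug#begin('~/.vim/plugged')
Plug 'fatih/vim-go'
call plug#end()

" \ + b -> build, \ + i -> install (Go files only)
autocmd FileType go nmap <Leader>b <Plug>(go-build)
autocmd FileType go nmap <Leader>i <Plug>(go-install)

" Step forward/back through quickfix errors
map <C-n> :cnext<CR>
map <C-m> :cprevious<CR>

" \ + a -> close the quickfix window
nnoremap <Leader>a :cclose<CR>
```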
Autocompletion sucks though 🙁 so adding neocomplete is a must.
With existing versions of brew-installed vim, plus the introduced dependency on Xcode, the setup time is high. I went through this in the past, and after a fairly long hiatus from writing code I find nothing is working quite right.
Step 4: Write an interface between the functions and a basic user interface (text?)
Step 5: Test!
Conclusion: Using Scala (trying to, anyway) the implementation is not very complex. What is more difficult is replicating the original machines, where the output character could not equal the input character – the limitation Turing exploited to crack the machines. Might see if I can implement that later, then test out the cracking method. Not sure how easy it would be to break the current implementation…
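That no-self-encryption property comes from the reflector, which pairs letters up so every substitution is reciprocal. A minimal Scala sketch of that idea — using an arbitrary ROT13-style pairing for illustration, not a historical Enigma wiring:

```scala
// A reflector-style reciprocal substitution: every letter is paired
// with exactly one other letter, so encode(encode(c)) == c and no
// letter ever maps to itself. Pairing here is A<->N, B<->O, ...
// (arbitrary, chosen only to demonstrate the property).
object Reflector {
  val mapping: Map[Char, Char] =
    ('A' to 'Z').map(c => c -> (((c - 'A' + 13) % 26) + 'A').toChar).toMap

  def encode(c: Char): Char = mapping(c)
}
```

Because the pairing has no fixed points, a codebreaker can discard any candidate alignment where a ciphertext letter lines up with the same plaintext letter.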
After hearing the praises of Hadoop I had a brief look at it and the Map/Reduce paradigm. Most info was read from Wikipedia and references in wiki articles – in particular, this paper from Jeffrey Dean and Sanjay Ghemawat.
Hadoop is an open-source software framework that implements MapReduce and the Google File System. The aim is to enable easy and robust deployment of highly parallel data set processing programs. It is important to note that the MapReduce model is applicable to embarrassingly parallel problems. Processing can occur on data that is stored in a database or filesystem.
Map/Reduce refers to the steps in the model:
Map: A master node takes an input problem and divides it into sub-problems, passing them to worker nodes. Worker nodes can further divide sub-problems. For example, in the problem of counting word occurrences in a document, the Map function will output a key/value pair every time it sees a specified word – ie: (“searchterm”, 1).
Reduce: The reduce function takes the list of word(key)/values and sums the occurrences:
The output of reduce in this case could be: (foo, 3).
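As a rough sketch of those two steps — in Scala, with plain collections standing in for the Hadoop framework, so the function names and types here are illustrative assumptions:

```scala
// Word count as a Map step and a Reduce step over plain collections.
object WordCount {
  // Map: emit a (word, 1) pair for every word in the document
  def mapStep(document: String): Seq[(String, Int)] =
    document.toLowerCase.split("\\s+").filter(_.nonEmpty).map(w => (w, 1)).toSeq

  // Reduce: group pairs by word (the key) and sum the occurrences
  def reduceStep(pairs: Seq[(String, Int)]): Map[String, Int] =
    pairs.groupBy(_._1).map { case (word, vs) => (word, vs.map(_._2).sum) }
}

// reduceStep(mapStep("foo bar foo baz foo")) contains "foo" -> 3
```

In real Hadoop the grouping between the two steps (the shuffle) and the distribution across machines are handled by the framework; the developer supplies only the two functions.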
The MapReduce model becomes powerful when you consider that giant datasets can be processed very efficiently on large clusters this way.
The Hadoop run-time takes care of:
Partitioning of input data
Scheduling programs execution across machines
Handling machine failures
Managing inter-machine communication
Hadoop aims to enable developers with little distributed programming experience to utilize compute resources such as EC2.
With the emergence of ‘big data’ and the apparent value that can be extracted from massive databases/datastores, many organisations have found the limits of traditional relational databases. Hadoop generates such a big buzz because it can push past the processing boundaries of relational database software and enable that extraction of value. The video below is a decent explanation of this point by data scientists at SalesForce.com