Monday, February 24, 2020
Suzy Cube Update: March 23, 2018
Sunday, February 23, 2020
Tech Book Face Off: CoffeeScript Vs. Simplifying JavaScript
CoffeeScript
This book was written a few years ago now, in early 2015, but CoffeeScript is still alive and kicking, especially for Ruby on Rails developers as the default front-end language of choice. CoffeeScript is integrated into Rails' asset pipeline, so it gets automatically transpiled to JavaScript and minified as part of the production release process. If you're already comfortable with JavaScript, and even more so if you know Ruby, then CoffeeScript is a breeze to learn.
The ease with which this language can be picked up is exemplified by the book, since it's one of the shortest books I've ever read on a programming language. Over half of the book is devoted to examples, applications, and other material tangential to CoffeeScript rather than the language proper. The book itself is just short of 100 pages, and the content on the syntax and usage of the language is condensed into the first half.
As all books like this do, the first chapter starts out with how to install the language and configure the environment. It's pretty straightforward stuff. Then, we get into all of the syntax changes that CoffeeScript brings to JavaScript, which essentially defines the language since all of the features are the same as JavaScript's. Chapter 2 shows how function and variable declarations are different, and much shorter. Chapter 3 demonstrates some nice syntactical sugar for arrays in the form of ranges, and iteration can be done more flexibly with for comprehensions. Chapter 4 gets into the syntax features for defining classes and doing inheritance concisely.
Most of the syntax will look quite familiar to Rubyists, including class instance variables denoted with an '@' prefix, the string interpolation notation, unless conditionals, and array ranges. Here's an example from the book showing a number of the syntax features:
class Tribble
  constructor: ->            # class constructor definition
    @isAlive = true          # instance variable definition
    Tribble.count += 1       # class variable access
  breed: -> new Tribble if @isAlive
  die: ->
    return unless @isAlive
    Tribble.count -= 1
    @isAlive = false
  @count: 0                  # class variable (property)
  @makeTrouble: -> console.log ('Trouble!' for i in [1..@count]).join(' ')
This code would be about twice as many lines in JavaScript, so the compression is pretty great and the code is much cleaner and easier to understand. Burnham proclaims these virtues of CoffeeScript early on in the book:

Shorter code is easier to read, easier to write, and, perhaps most critically, easier to change. Gigantic heaps of code tend to lumber along, as any significant modifications require a Herculean effort. But bite-sized pieces of code can be revamped in a few swift keystrokes, encouraging a more agile, iterative development style.

Maybe that's stated a bit more strongly than is warranted, but it's still hard to argue with the improved simplicity and cleanliness of CoffeeScript making developers' lives more pleasant.
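To make that comparison concrete, here's roughly what the Tribble class above might look like in plain ES5 JavaScript. This is my own sketch for comparison, not code from the book:

// Rough ES5 equivalent of the CoffeeScript Tribble class (my sketch, not from the book)
function Tribble() {
  this.isAlive = true;       // instance variable definition
  Tribble.count += 1;        // class variable access
}

Tribble.count = 0;           // class variable (property)

Tribble.prototype.breed = function () {
  if (this.isAlive) {
    return new Tribble();
  }
};

Tribble.prototype.die = function () {
  if (!this.isAlive) {
    return;
  }
  Tribble.count -= 1;
  this.isAlive = false;
};

Tribble.makeTrouble = function () {
  var shouts = [];
  for (var i = 1; i <= Tribble.count; i++) {
    shouts.push('Trouble!');
  }
  console.log(shouts.join(' '));
};

Even allowing for differences in formatting style, the CoffeeScript version says the same thing in a fraction of the lines.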
The last three chapters of the book delve into different frameworks and packages in the JavaScript universe that can be used with CoffeeScript, and the vehicle for exploring these things is a (heavily) stripped down version of the Trello app. Chapter 5 goes through how to create the front-end portion of the app with jQuery and Backbone.js. Chapter 6 adds a backend server for the app with Node and Express. Chapter 7 explores how to test the app with Intern. All of the code for the front-end, backend, and tests is written in CoffeeScript, and the transpiling is set up to be managed with Grunt. It's nice to see several different examples of how to use CoffeeScript anywhere that JavaScript would normally be used, just to get an idea of the different ways to transition to it.
Throughout the book, Burnham presents everything in a straightforward, no-frills manner. Everything is clear and logical, and his concise descriptions are part of the reason the book is so short. He assumes you already know JavaScript—which is much appreciated—and he doesn't go into extended explanations of JavaScript's features. It's just the facts on how CoffeeScript is different and what the syntax is for the features it compresses. It's awfully hard for me not to recommend this book simply because it's so short and to the point. It only took a few hours to read through, and now I know a better way to code JavaScript. There's not much more I can ask of a programming language book.
Simplifying JavaScript
Every language has those more advanced books that assume you already know the language and, instead of covering the basics and syntax, provide advice on how to write idiomatically in it. I've read these books for C++, Ruby, and JavaScript and found them to be surprisingly enjoyable to read. That was not the case with this book, but before I get too far into criticisms, I should summarize what this book does well.
Simplifying JavaScript is organized into ten chapters, with each chapter broken into a set of tips, 51 in all. Each tip explains one new feature of the JavaScript language from the ES5, ES6, and ES2017 specifications. Some features, like the spread operator, take multiple tips to fully cover. Then, the last chapter covers some features of the JavaScript development environment, like npm, that are not part of the language and have been around a bit longer than these newer specifications.
Most of the new features significantly improve and simplify the language, and they include things like the following (several of them are sketched in the short example after this list):
- new variable declaration keywords const and let
- string template literals, which look much like Ruby's string interpolation
- the spread operator ... for converting arrays to lists and converting lists of parameters to arrays
- the Map object
- the Set object
- array iteration methods such as map(), filter(), and reduce()
- default parameters
- object destructuring
- unnamed arrow functions
- partially applied functions and currying
- classes
- promises and async/await
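Just to give a flavor of what these look like in practice, here's a small sketch of my own (not an example from the book) that strings a few of them together:

const languages = ['JavaScript', 'CoffeeScript', 'TypeScript'];

// Template literals, arrow functions, and default parameters
const greet = (name = 'stranger') => `Hello, ${name}!`;
console.log(greet('JavaScript'));    // Hello, JavaScript!
console.log(greet());                // Hello, stranger!

// The spread operator copies arrays or spreads them into argument lists
const moreLanguages = [...languages, 'Elm'];
console.log(Math.max(...[3, 1, 4])); // 4

// Object destructuring
const book = { title: 'Simplifying JavaScript', tips: 51 };
const { title, tips } = book;
console.log(`${title} has ${tips} tips`);

// map(), filter(), and reduce() instead of hand-rolled loops
const totalLength = moreLanguages
  .map(lang => lang.length)
  .filter(len => len > 3)
  .reduce((sum, len) => sum + len, 0);
console.log(totalLength);            // 32

// Set (and Map) for collections with proper semantics
const firstLetters = new Set(moreLanguages.map(lang => lang[0]));
console.log(firstLetters.has('J'));  // true

None of this is revolutionary, but it's exactly the kind of everyday code the book's tips are aimed at.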
Morgan does a nice job introducing and justifying the new features at times:
We spend so much time thinking and teaching complex concepts, but something as simple as variable declaration will affect your life and the lives of other developers in a much more significant way.

This is so true. The code we're reading and writing every day has hundreds of variable declarations and usages, and being able to indicate intent in those declarations makes code much cleaner and more understandable. Getting better at the fundamentals of the language, and having these new declarations available so that the most common code is clear and purposeful, will improve code far more than the complicated, esoteric features that only get used once in a blue moon.
These exciting new features and simple explanations were the good parts, so why did I end up not liking this book much? Mostly, it was because of how long-winded the explanations were. Each tip dragged on for what felt like twice as long as it needed to, and the book could have easily been half as long. CoffeeScript showed how to present language features in a clear, concise way; this book took the opposite approach. Then, to make matters worse, it was written in the second person, with the author always referring directly to the reader with you this and you that. Normally I don't mind a few references to you, the reader, every now and then, but here it was constant, and it quickly became aggravating.
Beyond the writing style, some of the justifications for various features didn't hold much water. For example, when trying to rationalize the new variable declarations, Morgan presented an example of code where the variables are declared at the top, and then there are a hundred lines of code before those variables are used again. Then he says, "Ignore the fact that a block of code shouldn't be 100 lines long; you have a large amount of code where lots of changes are occurring." I don't know about you, but I wouldn't declare a variable and then not use it for a hundred lines. I would declare it right before use. He shouldn't have to contrive a bad example like that to justify the new const and let declarations. The improved ability to relate intent in the code should be reason enough.
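For what it's worth, here's a tiny sketch of my own (not one of Morgan's examples) showing what the new declarations communicate when the variable is declared right before use:

// const signals "this binding never changes"; let signals "this will be reassigned".
const SHIPPING_FLAT_FEE = 5;          // intent: never reassigned

function orderTotal(prices) {
  let sum = SHIPPING_FLAT_FEE;        // intent: accumulates, so it will be reassigned
  for (const price of prices) {       // a fresh, unchanging binding each iteration
    sum += price;
  }
  return sum;
}

console.log(orderTotal([10, 20]));    // 35

The declarations themselves carry the intent, no hundred-line gap required.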
In another example of why one must be careful when testing for truthy values in a conditional, he shows some code that would fail because a value of 0 is falsy:
const sections = ['shipping'];
function displayShipping(sections) {
  if (sections.indexOf('shipping')) {
    return true;
  } else {
    return false;
  }
}
Ignoring the fact that I cringe at code like this, which returns a boolean value that should be computed directly instead of selected through an if statement (don't worry, he corrects that later), there is much more wrong with this code than just the fact that an index of 0 will incorrectly hit the else branch. In fact, that is the only case that hits the else branch. Whenever 'shipping' is missing from sections, indexOf() will return -1, which is truthy! This code is just totally broken, even for an example that's supposed to show a certain kind of bug, which it does almost by accident.

Other explanations were somewhat lacking in clarity. Late in the book, when things start to get complicated with promises, the explanations get much more brief and gloss over how promises actually work mechanically and how the code executes. After having things explained in excruciating detail and in overly simplistic terms, I was surprised at how little explanation was given for promises. A step-by-step walkthrough of how the code runs when a promise executes would have been quite helpful in understanding that feature better. I figured it out, but through no fault of the book.
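Since the book doesn't provide one, here's the kind of step-by-step sketch I mean, written by me rather than taken from the book. The comments trace the order in which the pieces actually run:

console.log('1: synchronous code runs first');

const promise = new Promise((resolve) => {
  // The executor function runs immediately and synchronously.
  console.log('2: the executor runs right away');
  resolve('the value');
});

promise.then((value) => {
  // .then() callbacks run later, as microtasks, only after the
  // currently executing synchronous code has finished.
  console.log(`4: the then() callback receives "${value}"`);
});

console.log('3: the rest of the synchronous code finishes');
// Output order: 1, 2, 3, 4

And to close the loop on the broken indexOf() example above, one way to fix both problems is to test for presence explicitly and return the boolean directly. This is my own sketch; the book's later correction may differ in its details:

function displayShipping(sections) {
  return sections.includes('shipping');          // ES2016
  // Pre-ES2016 alternative: return sections.indexOf('shipping') !== -1;
}

console.log(displayShipping(['shipping']));      // true
console.log(displayShipping(['footer']));        // false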
Overall, it was a disappointing read that didn't live up to the expectations I had built up from similar books. The tone of the book was meant more for a beginner while the content was geared toward an intermediate to expert programmer. While learning about the new features of JavaScript was great, and there are plenty of new features to get excited about, there must be a better way to learn about them. At least it was a quick read, and refreshing my memory will be easy by skimming the titles of the tips. I wouldn't recommend Simplifying JavaScript to anyone looking to come up to speed on modern JavaScript. There are better options out there.
Thursday, February 20, 2020
Bulls & Bears Board Game On Kickstarter
Bulls and Bears is a financial board game from 1988. The first version of the game sold over 10,000 copies. I have talked with the designer on a few occasions and I may be helping him to turn the game into something more Euro-style. In the meantime, they are kickstarting a new version of the original game, called Trumponomics vs. Bernienomics (yeah, okay).
It has the usual roll-and-move mechanics, with a few clever additions that make it better than other games of its type.
For one thing, the trivia questions are informative about stock market topics, but rather than simply answering them, you react to events by putting your money down or taking actions; afterward, the result of the event (what went up or down in the world because of it) is revealed, and you gain or lose by it. As such, the mechanics of learning about the stock market are integrated right into the play, which makes for both better learning and a better play experience.
Unlike Monopoly, the game ends and is won when someone reaches $200k, which means less direct competition and a game that doesn't drag on. There is already a Euroish feel in that you are building a kind of economic engine through your investments.
Topics include financial markets, commodities, insurance, housing, mortgages, retirement, and so forth. The designer has a PhD and worked at the World Bank for many years. Their website includes online play and guidebooks for educators.
This new edition is a roll and move game, like the original, which is what it is. But if you like non-gamer games, and you actually want to learn real financial information with a game that's actually fun to play, this may be your thing.
Buds, Blooms, And Thorns Review Of Island Hopper By Eagle-Gryphon Games
Vitals:
Title: Island Hopper
Designed by: Scott Almes
Publisher: Eagle Gryphon Games
MSRP: $50
2-6p | 30-45 min | 8+
Introduction:
Each day in Island Hopper, players auction off the Captain's seat; the player who becomes the Captain is in charge of flying the plane for the day, but cannot make any deliveries of their own. To make their deliveries, the other players bribe the Captain to fly to the islands to which they need to go, thereby earning themselves cash. When it's time for the Captain to fly, the Captain must close his eyes, pick up their goods tokens, and attempt to land them in an island's harbor. A successful landing means that players can fulfill their contracts and the captain collects his bribe — but if the goods splash into the sea, you might find yourself under water...
—description from the publisher
Blooms are the game's highlights and features. Elements that are exceptional.
- A silly combination of dexterity and social interaction with a fun bidding mechanic.
- There are fun, strategic choices to make.
- The artwork by Kwanchai Moriya is whimsical and the components are top notch.
- There are lots of moments for laughs, as long as you don't take the game too seriously.
Buds are interesting parts of the game I would like to explore more.
- This game isn't for everyone, but if you like social interaction and light strategy, give it a shot!
- Having limited chances to say a single word has pros and cons. On one hand, it prevents people from just shouting things out randomly; on the other hand, with certain players it results in no one saying anything. It might be fun to try playing without the direction tokens and make the pilot figure out directions from a cacophony of different instructions.
Thorns are a game's shortcomings and any issues I feel are noteworthy.
- The game is simple enough, but the rulebook could have used a few more runs by a proofreader. There are a few typos and phrases that seem to be left over from the prototype (referring to coins as cubes, for example), the terms 'round' and 'phase' are used interchangeably, and there are a few details that seem to be missing (like what triggers the end game).
- We found that players' hands tend either to rise above the table or to drop closer to it, so some players drop the goods from higher up, resulting in more bounces and fewer successful landings, while others are almost placing the goods right onto the islands. Two to three inches is an ideal height, but it's difficult for everyone to be consistent.
- While the art throughout the game is pretty nice, the coins are super generic. They're functional, but about as plain as could be.
- There is a very high amount of luck in the game, particularly for what contracts and passengers are available to draw.
- Missing from the rules is what the ruling should be (success or not) if a good lands partly on an island and partly on coins: not touching the table, but definitely supported by the coins surrounding the island. We've been playing that the coins become an extension of the island (thus making it even more attractive to try to fly to), but there's no discussion of this in the rules at all.
This is a great game to play with the family, particularly the 8-15 age range. I think the 12+ age limit is quite a bit higher than necessary - there are no complex mechanics or concepts. Probably the limiting factor is how far across the table the players can reach since the islands can be spread out a bit. I think if you like games like Colt Express, Junk Art, or similar light, silly games, Island Hopper might be a good choice for you!
Tech Book Face Off: Seven Concurrency Models In Seven Weeks Vs. CUDA By Example
Seven Concurrency Models in Seven Weeks
I had previously enjoyed reading three other Seven in Seven Weeks books, so I figured this one was an obvious choice for a solid book on concurrency, and that hunch held true. Butcher gives an excellent tour of the current state of concurrency and parallelism in the software development world, and he does it with a compelling story that builds up from the foundations of concurrency to the modern state-of-the-art services available for Big Data processing, at least circa 2014.

The main rationale for paying more attention to concurrency and parallelism is that that's where the hardware is taking us. As Butcher argues in the introduction:
The primary driver behind this resurgence of interest is what's become known as the "multicore crisis." Moore's law continues to deliver more transistors per chip, but instead of those transistors being used to make a single CPU faster, we're seeing computers with more and more cores.
As Herb Sutter said, "The free lunch is over." You can no longer make your code run faster by simply waiting for faster hardware. These days if you need more performance, you need to exploit multiple cores, and that means exploiting parallelism.

So if we're going to take advantage of all of these multiplying cores, we'd better figure out how to handle doing multiple things at once in our programs.
Our concurrency story begins with the little things. The first week focuses on the fundamentals of concurrency: threads and locks. Each week is split into three days, each day building on the day before, with the intention of being able to learn and experiment with the chapter's contents over a weekend. This first week on threads and locks is not meant to show the reader how to do modern parallel programming with threads, but to give a foundation of understanding for the higher-level concepts that come later. Threads are notoriously difficult to use without corrupting program state and crashing programs, and locks are a necessary evil that can help solve those corruption problems but have problems of their own, like deadlocks and livelocks. These problems are especially insidious because they're most often invisible, as Butcher warns:
To my mind, what makes multithreaded programming difficult is not that writing it is hard, but that testing it is hard. It's not the pitfalls that you can fall into; it's the fact that you don't necessarily know whether you've fallen into one of them.

The first concurrency model gives us a view into that abyss, but then pulls back and moves on to better alternatives right away. The first better model turns out to be an old programming paradigm that has recently become more and more popular: functional programming. One of the biggest problems with programming languages like C or Java is that they have mutable state. That means most of their data structures and variables can and do change by default. Functional languages, on the other hand, default to immutable data structures that don't have the same problems when sharing state across threads.
The next model goes into detail about how one functional language, Clojure, uses the basic advantages of immutable state by separating identity and state. The identity of a data structure is what that data structure is inherently, like a list of names. It doesn't change. The state, which specific names are in the list, can change over time, and a persistent data structure in Clojure will guarantee that if the state changes for one thread, it will not change for other threads unless that state is explicitly passed from one thread to another. This separation of identity and state is accomplished by atoms and agents, but we don't have time to get into the specifics here. It's in the book.
After Clojure, we move on to Elixir, another functional language that takes a different approach to parallelism. Instead of threads, Elixir has extremely lightweight processes that can be used to make highly reliable applications out of unreliable components. The perspective to take when programming in Elixir is to design the application so that individual processes are not critical and can fail. Then instead of trying to do thorough error checking, we can just let them crash and depend on the system to recover and restart them. This approach makes for incredibly reliable systems, and with Elixir running on the Erlang VM, it has a solid foundation for bulletproof systems.
With the next model, we come back to Clojure to explore communicating sequential processes (CSP). Instead of making the endpoints in a message the important thing, CSP concentrates on the communication channel between the endpoints. In Clojure this is implemented with Go Blocks, and it's an intriguing change to the normal way of thinking about message passing between threads or processes.
What are we at now, the sixth model? This model steps outside of the CPU and takes a look at the other supercomputer in your PC, the massively parallel GPU. This chapter was a little too short for the subject to get a great understanding of what was going on, but it does use OpenCL for some simple word-counting applications that run on the GPU. It was neat to see how it works, but it was a lot of boilerplate code that was pretty opaque to me. I'm hoping the other book in this face-off will shed much more light on how to do GPU programming.
The final model takes us into the stratosphere with serious Big Data processing using Hadoop and Storm, frameworks that enable massively parallel data processing on large compute clusters. It was surprising to see how little code was needed to get a program up and running on such an industrial strength framework. Granted, the program was a simple one, but thinking about what the framework accomplishes is pretty intense.
That brings us to the end of the tour of concurrency models. The breadth of topics covered was exceptional, and the book flowed quite nicely. Butcher's explanations were clear, and he did an excellent job covering a wide-ranging, complex topic in a concise 300 pages. If you're looking for an overview of what's out there today in the way of concurrent and parallel programming, this is definitely the book to start you on that journey.
CUDA by Example
The first chapter starts out with a bit of history on the graphics processing unit (GPU) and why we would need a general-purpose platform such as CUDA for doing computations on it. The short answer is that the prior situation was dire. The longer answer is as follows:
The general approach in the early days of GPU computing was extraordinarily convoluted. Because standard graphics APIs such as OpenGL and DirectX were still the only way to interact with a GPU, any attempt to perform arbitrary computations on a GPU would still be subject to the constraints of programming within a graphics API. Because of this, researchers explored general-purpose computation through graphics APIs by trying to make their problems appear to the GPU to be traditional rendering.

Suffice it to say, people were not particularly satisfied shoehorning their algorithms into the GPU through graphics programming, so CUDA and OpenCL were a welcome development.
The next chapter goes through how to get everything ready on your computer in order to start writing and running CUDA code, and the chapter after that finally unveils the first program to run on the GPU. It's not exciting, just the standard "Hello, World!" program, but this example does introduce some of the special syntax and keywords that are used in CUDA programming.
Chapter 4 is where the real fun begins. We get to run an honest-to-goodness parallel program on the GPU. It's still simple in that it's only summing two vectors together element by element, but it's doing the calculation with each pair of elements in its own thread. Each thread gets assigned to its own resource on the GPU, so theoretically, if the GPU had at least as many compute resources as there are pairs of elements, all of the additions would happen simultaneously. It may not seem quite right to use compute resources in this way since we're so used to programming on much more serial CPUs, but the GPU hardware is designed specifically to do thousands of small calculations in parallel in a highly efficient manner. It's definitely a programming paradigm shift.
After another more interesting example of calculating and displaying the Julia Set, a kind of fractal set of numbers, the next chapter follows up with how to synchronize these thousands of threads in calculations that aren't completely parallel. The example here is the dot product calculation, and this example ends up getting used multiple times throughout the rest of the book. So far the examples have been unique, but they'll start to get reused from here on, partly in order to not need to keep introducing more new algorithms for each example.
The next couple chapters discuss the different types of memory available in a GPU. A small amount of constant memory is there to hold values that are, well, constant, for fast access instead of needing to keep fetching those unchanging values from main memory or having them fill up the cache unnecessarily. Then there's texture memory available for optimized 2-D memory accesses, which are common in certain algorithms that operate on neighboring memory locations in two dimensions instead of the normal one dimension of vector calculations.
Chapter 8 discusses how to combine the use of the GPU as both a CUDA processor and a graphics processor without needing to copy buffers back and forth to the host memory. Actually, a lot of CUDA programming is optimized by thinking about how best to use the memory resources available. There are now at least three more memories to consider: the GPU main memory, constant memory, and texture memory, in addition to the normal system memory attached to the CPU we're used to thinking about. The options have multiplied, and it's important to use both the CPU and GPU efficiently to get the best performance.
We're nearing the end now, with chapters on using atomics to maintain memory consistency when multiple threads are accessing the same locations, using streams to more fully utilize a GPU's resources, and using multiple GPUs to their full potential, if your system is blessed with more than one GPU. By this point much of the content is starting to feel redundant, with incremental features being added to the mix and most of the examples and explanations of the code being copies of previous examples with minor tweaks for the new features.
The last chapter is a review of what was covered in the book, some recommendations of more resources to learn from, and a quick tour of the debugging tools available for CUDA. While overall this book was fairly good for learning how to do massively parallel programming with CUDA, and I certainly enjoyed coming up to speed with this exciting and powerful technology, the second half of the book especially felt drawn out and repetitive. The explanations got to be too verbose, and frankly, the cringe-worthy sense of humor couldn't carry the redundancy through. The book could have easily been half as long without losing much, although the pace was certainly easy to keep up with. I never struggled to understand anything, and that's always a plus. I've got a couple other CUDA books that may be better, but CUDA by Example is sufficient to learn the ropes in a pinch.
Of these two books, Seven Concurrency Models in Seven Weeks was the more wide-ranging and enlightening book. It gave a wonderful overview of the landscape for concurrent and parallel programming, even though it couldn't go into enough depth on any one topic to do it justice or allow the reader to competently start working in that area. Like all of the Seven in Seven books, its purpose is not to make the reader an expert, but to provide enough information to give the reader a fighting chance at making their own decision on a path. Then, the reader can follow that path further with a more specialized book. CUDA by Example is one such specialized book, although it was somewhat light on the real details of GPU programming. As an introductory book, it was adequate, but I'm hoping the next couple of books I read on GPU programming will have more substance. In any case parallel programming is growing in importance, and it's exciting to be able to play around with it on consumer-grade hardware today.
Wednesday, February 19, 2020
Spy World
There are good reasons why the Cold War (the period between the end of WWII and the fall of the Soviet Union in 1991) is such a compelling genre for stories and games. The massive distrust between America and Russia combined with a fear of all-out war, plus the rapidly advancing but still largely analog stealth and surveillance technology made it a unique time in world history, both thrilling and terrifying.
Covert, designed by Kane Klenko and published by Renegade Game Studios, does a good job of evoking the Cold War with what at first glance seems to be a disparate group of game mechanics. The goal of the game is to collect sets of cards representing spy gear like listening devices, travel documents, and hidden escape kits, and then turn those cards in for points, but as usual there's a bit more to it than that. The sets players are going for are determined by mission cards that will sometimes also include the need to have a pawn in a particular spot on the board.
Movement around the board is also one of the principal ways of gaining more cards, either by having a pawn in a certain part of the board when it's time to draw cards, or by following other players around and collecting the clue tokens they leave behind as they move from city to city on a board representing Europe during the Cold War.
But that's not even the most interesting part of the game. Player actions are determined by a dice placement system: at the start of each round, each player rolls 5 dice and uses them to determine which of 6 possible actions they'll be able to do that turn. The first player to choose a particular action can do so freely by placing a die showing any number on that part of the board, but the next player who wants to do the same action has to place a die that comes either directly before or directly after the dice that are already there. For example, if your opponent plays a 2 on the "draw a card" action, you have to play either a 1 or a 3 in order to also do that action.
If the numbers just aren't cooperating and there's nowhere on the board you can play, you can spend a die to draw a random token that gives you a one-time special ability, such as being able to switch the number on a die or play two in a row. You can also just end your turn early, which guarantees that you'll get to go first on the next turn.
On top of all that, there's a code-breaking phase consisting of two rows of random numbers, and a deck of equipment cards with 3-digit codes on them. If you can manipulate the numbers so that any 3 of them match the code on your card, you can either use that card as part of a set of equipment (the main way of scoring points), or cash it in for bonus points at the end of the game.
Covert manages to be complex without being complicated, and it makes all of its different abstract game mechanics work together well while also feeling intrinsic to the theme. When I play this game I really feel like I'm getting a sense of what it must have been like for those Cold War spymasters, managing tons of moving parts, manipulating events to work out in their favor, and taking advantage of situations they might not have control over.
Rating: 4 (out of 5) A very entertaining game that combines theme and mechanics well.
SuperTuxKart 1.0 Release
It's been a long and winding road for mascot racer SuperTuxKart, but after more than ten years of continuous and dedicated progress, the team has finally announced the release of build 1.0, marking an important development milestone for the project.
As quoted from the official release post:
Yes, if you have followed our development a bit, that might be a bit of a surprise. But we have been asked why we don't call this release 1.0, and the majority of us developers discussed this and decided that indeed this release is a major milestone that deserves the big 1.0 number.
Indeed a nice surprise and definitely a big step forward with the inclusion of online multiplayer!
See more new features in the official release video:
As usual you can download the game here. Also don't forget to head over to our forums to provide some feedback to the developers.
Code License: GPLv3
Assets License: CC BY-NC-SA 3.0
This post was retrieved from freegamer.blogspot.com.
Thursday, February 13, 2020
Brave Browser: The Best Privacy-Focused Browser of 2020
Out of all the privacy-focused products and apps available on the market, Brave has been voted the best. Other winners of Product Hunt's Golden Kitty awards showed that there was a huge interest in privacy-enhancing products and apps such as chats, maps, and other collaboration tools.
An extremely productive year for Brave
Last year was a pivotal one for the crypto industry, but few companies managed to see the kind of success Brave did. Almost every day of the year was packed with action, as the company managed to officially launch its browser, get its Basic Attention Token out, and onboard hundreds of thousands of verified publishers on its rewards platform.
Luckily, the effort Brave has been putting into its product hasn't gone unnoticed.
The company's revolutionary browser has been voted the best privacy-focused product of 2019, for which it received a Golden Kitty award. The awards, hosted by Product Hunt, were given to the most popular products across 23 different product categories.
Ryan Hoover, the founder of Product Hunt, said:
"Our annual Golden Kitty awards celebrate all the great products that makers have launched throughout the year"
Brave's win is important for the company—with this year seeing the most user votes ever, it's a clear indicator of the browser's rapidly rising popularity.
Privacy and blockchain are the strongest forces in tech right now
If reaching 10 million monthly active users in December was Brave's crowning achievement, then the Product Hunt award was the cherry on top.
The recognition Brave got from Product Hunt users shows that the market for privacy-focused apps is thriving. All of the apps and products that received a Golden Kitty award from Product Hunt users focused heavily on data protection. Everything from automatic investment apps and remote collaboration tools to smart home products emphasized privacy.
AI and machine learning rose as another noteworthy trend, but blockchain seemed to be the dominant force in app development. Blockchain-based messaging apps and maps were hugely popular with Product Hunt users, who seem to value innovation and security.
For those users, Brave is a perfect platform. The company's research and development team has recently debuted its privacy-preserving distributed VPN, which could potentially bring even more security to the user than its already existing Tor extension.
Brave's effort to revolutionize the advertising industry has also been recognized by some of the biggest names in publishing—major publications such as The Washington Post, The Guardian, NDTV, NPR, and Qz have all joined the platform. Some of the highest-ranking websites in the world, including Wikipedia, WikiHow, Vimeo, Internet Archive, and DuckDuckGo, are also among Brave's 390,000 verified publishers.