
Anagram Solver

I was writing a simple string-permutation function and thought of building it out into an anagram solver, just for completeness.

The dictionary can be provided as a word list in the form of a text file with one word per line. You can find several word lists here:

[code language="cpp"]

#include <iostream>
#include <fstream>
#include <string>
#include <map>
#include <cctype>

using namespace std;

class AnagramChecker
{
    map<string, bool> Dictionary;
    map<string, bool> ResultList;

    // Recursive string permuter: inserts Test[Cur] at every position of
    // Buffer, then checks finished permutations against the dictionary.
    void RecurveStrPerm(string Buffer, string Test, int Cur)
    {
        if (Cur >= (int)Test.length())
        {
            if (Dictionary.count(Buffer) > 0)
                ResultList[Buffer] = true;
            return;
        }
        for (int i = 0; i <= (int)Buffer.length(); i++)
        {
            Buffer.insert(i, 1, Test[Cur]);
            RecurveStrPerm(Buffer, Test, Cur + 1);
            Buffer.erase(i, 1);
        }
    }

    // Build a lookup table out of the word list
    void BuildInMemDic()
    {
        ifstream DicReader("WordList.txt");
        string CurrentWord;
        while (getline(DicReader, CurrentWord))
        {
            for (int i = 0; i < (int)CurrentWord.length(); i++)
                CurrentWord[i] = tolower(CurrentWord[i]);
            Dictionary[CurrentWord] = true;
        }
    }

public:
    AnagramChecker()
    {
        BuildInMemDic();
    }

    // Print the collected results
    void GetResult()
    {
        cout << "\nAnagrams: \n";
        for (map<string, bool>::iterator ResultListPtr = ResultList.begin(); ResultListPtr != ResultList.end(); ResultListPtr++)
            cout << "\n" << ResultListPtr->first;
    }

    void Find(string Test)
    {
        ResultList.clear();
        RecurveStrPerm("", Test, 0);
    }
};

int main()
{
    string Test = "Slate";
    cout << "\nBuilding in-memory dictionary...";
    AnagramChecker AnaObj;
    cout << "\n\nIn-memory dictionary built!\n\n";

    char ExitChoice = 'n';
    while (ExitChoice != 'y')
    {
        cout << "\n\nEnter anagram: ";
        cin >> Test;
        for (int i = 0; i < (int)Test.length(); i++)
            Test[i] = tolower(Test[i]);

        cout << "\n\nAnagrams for " << Test << ":\n";
        AnaObj.Find(Test);
        AnaObj.GetResult();

        cout << "\n\nDo you want to exit? y/n: ";
        cin >> ExitChoice;
    }
    cout << "\nEnd of code\n";
    return 0;
}

[/code]

The code is NOT optimized. It can be sped up with simple multi-threading, but I have left that out for simplicity.
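The permutation approach above is factorial in the input length. A common alternative worth noting (this is my own sketch, not part of the original code, and the names `SortedKey`/`FindAnagrams` are made up for illustration) is to index each dictionary word by its sorted lowercase letters, so that a lookup becomes one sort plus one map access instead of generating every permutation:

```cpp
#include <algorithm>
#include <cctype>
#include <map>
#include <string>
#include <vector>

// Canonical signature of a word: its letters lowercased and sorted.
// All anagrams of a word share the same signature.
std::string SortedKey(std::string Word)
{
    for (char &c : Word)
        c = (char)tolower((unsigned char)c);
    std::sort(Word.begin(), Word.end());
    return Word;
}

// Group dictionary words by signature, then return the group matching Test.
std::vector<std::string> FindAnagrams(
    const std::vector<std::string> &Dictionary, const std::string &Test)
{
    std::map<std::string, std::vector<std::string>> BySignature;
    for (const std::string &Word : Dictionary)
        BySignature[SortedKey(Word)].push_back(Word);
    return BySignature[SortedKey(Test)];
}
```

In a real program you would build `BySignature` once (as `BuildInMemDic` does for the map above) and reuse it across queries.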

