Avoiding real English words in "short URLs", without sacrificing too much headroom
Assuming here that the language in question is English, and the character sets used are basic ASCII / latin alphabet.
When generating "Short URLs", the first thought is often to use a large "code set"/alphabet to convert an integer (possibly an ID referencing the long URL in your database) to a high "base" (URL-friendly Base-64, for example). In my specific case, I first opted to normalize to Base-36 (numbers, latin letters, not case-sensitive).
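The conversion described above can be sketched as follows; this is a minimal illustration (the alphabet and function name are chosen for the example, not taken from any particular library):

```python
# Hypothetical sketch: encode a database ID into a short code using a
# URL-friendly alphabet (Base-36 here: digits plus lowercase letters).
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def encode(n: int, alphabet: str = ALPHABET) -> str:
    """Convert a non-negative integer to a string in the given base."""
    if n == 0:
        return alphabet[0]
    base = len(alphabet)
    digits = []
    while n > 0:
        n, rem = divmod(n, base)
        digits.append(alphabet[rem])
    return "".join(reversed(digits))
```

Swapping in a different alphabet string changes the base (and the headroom) without touching the algorithm.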
However, upon closer inspection, one might find their Short URL generator eventually spitting out naughty words, or other common words, which may be quite undesirable.
One option to avoid generating "real words" would be to just strip out all of the common vowels.
Are there other/better workarounds that don't sacrifice too much headroom?
I think your idea to strip out the vowels will be your best bet here.
Anything else, like blacklists, dictionary lookups, etc., will just be incredibly tedious, require a lot of maintenance and, ultimately, be fallible.
You could normalize to base-30 `[0-9bcdfghj-np-tvwxz]`, which will simply never generate vowels and thus never generate real words.
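A minimal sketch of that Base-30 idea, assuming the alphabet from the character class above (digits plus the twenty vowel-free consonants):

```python
# Vowel-free Base-30 alphabet: [0-9bcdfghj-np-tvwxz] expanded.
# With no vowels available, the output can never spell an English word.
BASE30 = "0123456789bcdfghjklmnpqrstvwxz"

def encode_base30(n: int) -> str:
    """Encode a non-negative integer using the vowel-free alphabet."""
    if n == 0:
        return BASE30[0]
    out = []
    while n > 0:
        n, rem = divmod(n, 30)
        out.append(BASE30[rem])
    return "".join(reversed(out))
```

Compared with Base-36, each character carries log2(30) ≈ 4.9 bits instead of log2(36) ≈ 5.2 bits, so the codes grow only slightly longer.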
You could separate your vowels and consonants (xxxddd_eeeaaa). If each group is always longer than three letters, you're probably safe from curse words.
Or you could insert numbers randomly.
Or you could create a filter.
Of the three, I'd probably stick with the first.
To sacrifice as little information per digit as possible while still stripping out as much meaning as possible, you should probably leave out the most frequent letters in English. This will be slightly more efficient than simply skipping all vowels.
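As a rough sketch of that idea: the letters dropped below are an assumed approximation of the most frequent English letters (the familiar "ETAOIN SHR" ordering); the exact set you exclude is a tuning choice, not a fixed rule:

```python
import string

# Assumed approximation of the most frequent English letters.
FREQUENT = "etaoinshr"

# Keep digits plus every lowercase letter that is not in the frequent set.
alphabet = "".join(
    c for c in string.digits + string.ascii_lowercase
    if c not in FREQUENT
)
# 36 - 9 = 27 symbols remain, so each character still carries
# log2(27) ≈ 4.75 bits, versus log2(31) ≈ 4.95 bits for vowel-stripping.
```

Note this keeps some vowels (u stays), so it reduces rather than eliminates word formation; combine it with a vowel-free set if you need a hard guarantee.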