Robots That Reproduce

How does AI invent ideas, and then act on implementing them? Necessity is the mother of invention (trending keywords are today's necessities), and you cannot invent the wheel, you can only re-invent it. All great discoveries, like America or the microprocessor, occur by accident, but hard and smart exploration must still be paired with well-prepared goal-setting techniques.

Here’s how my logic works. Fred Flintstone almost spun off the road because of his rock tires, which are too hard and give a bumpy ride, exclaiming, “I can hardly wait until they make rubber tires.” So the status quo is rock tires (“rock” functions more as adjective than noun; “tire” is the noun), and there is a need for something better because “rock” is bad. (Robots are not judgmental, at first, but there is an underlying assumption that most people want change for the better, so the human data-feeding process, and the reading of and reaction to that data, will lead to change for the better.)

A definition search for “rock” yields synonyms, mostly adjectives: hard, solid, dense, etc. The solution lies in the antonyms of those adjectives, like soft, pliable, spongy, etc. Finding related nouns requires a search on those adjectives, and sometimes the library of words in the definition must be searched with what’s called a “reverse lookup”. Words have definitions based on the context in which they are used in sentences, but isolating words as nouns, adjectives, noun/adjective combinations, and of course verbs is critical in other examples too; here, the reverse lookup is likely to produce the word “rubber” or its synonyms. Synonyms, in most cases, don’t mean exactly the same thing, but the “degree of overlap” can be measured by the word that comes up most often across reverse-lookup searches seeded with the most starter words (for the purpose of this example, noun/adjective combinations). This is critical because “polymer”, and different examples of polymers (rubber is one of them), will allow robots to descend outward to high-frequency word matches in corresponding dictionaries, thesauruses, or generally linked documents (not necessarily a synonym or antonym).

“Viscosity” is one example, meaning the degree of resistance to change in shape, and a best numeric value, interpolated from other factors, can be looked up in the right page/database, or a page/database the robot descends to. Examples include tensile strength of hubs and bearings, size of bearings and the best lubricant with the best viscosity, studies on road topographies that become smoother and improve, and trends analysis for anticipated frequencies of these words, maybe even new compound words like “nano-energy”, formed when “nano-joule” and “energy” are fed into new sub-factors that affect a seed keyword like “energy conversion ratio”.

Unnecessary transitions, like “EGA” and “ISDN” in their day, must be understood in advance, with more flexibility in the spectrum of word searches. A “tire” or “wheel” definition search may lead to studies about correlated words like “transportation”, and then to no need for wheels or cars at all, but remote-controlled helicopters instead. Compromise can be good, though, like a helicopter with wheels for terrestrial roads when weather and/or air-traffic congestion makes air travel unsafe; an “unnecessary transition search” that descends from “tire” to “wheel” to “transportation” to “helicopter” to “helicopter wheel tire” becomes a formality for comparing what historic modes of transportation have in common.

So remember: rules, although thought necessary to restore temporary sanity, in the long run lead to micro-management and all the bad things in life. “Definition interpolators”, which assign a shades-of-gray definition to a word based on its relationship to other words in the sentence, will lead to less stress, and the type B person/robot, in the long run, is more productive and happy than the type A personality.
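To make the rock-to-rubber chain concrete, here is a minimal sketch of the synonym/antonym/reverse-lookup idea using NLTK’s WordNet corpus as the dictionary/thesaurus backend. The seed adjectives, the noun-only filter, and the overlap-counting score are my assumptions about how such a “definition interpolator” might start; the original describes the idea, not a specific library.

```python
# A minimal sketch of the synonym -> antonym -> related-noun chain,
# assuming NLTK's WordNet as the dictionary/thesaurus backend.
# Setup: pip install nltk; then nltk.download('wordnet')
from collections import Counter

from nltk.corpus import wordnet as wn


def antonym_candidates(adjective: str) -> set:
    """Collect antonyms of an adjective, e.g. 'hard' -> {'soft', ...}."""
    antonyms = set()
    for synset in wn.synsets(adjective, pos=wn.ADJ):
        for lemma in synset.lemmas():
            for ant in lemma.antonyms():
                antonyms.add(ant.name())
    return antonyms


def reverse_lookup_nouns(starter_words) -> Counter:
    """'Reverse lookup': scan every noun definition for the starter
    adjectives, counting hits to measure 'degree of overlap'.
    (A full scan of WordNet's nouns; slow but illustrative.)"""
    hits = Counter()
    starters = set(starter_words)
    for synset in wn.all_synsets(pos=wn.NOUN):
        definition_words = set(synset.definition().lower().split())
        overlap = starters & definition_words
        if overlap:
            for lemma in synset.lemmas():
                hits[lemma.name()] += len(overlap)
    return hits


if __name__ == "__main__":
    # Status quo: "rock" is hard/solid/dense; the fix is the antonyms.
    bad_qualities = ["hard", "solid", "dense"]
    good_qualities = set()
    for word in bad_qualities:
        good_qualities |= antonym_candidates(word)
    print("antonyms:", good_qualities)  # expect 'soft' among them

    # The reverse lookup should rank "rubber"-like nouns highly.
    ranked = reverse_lookup_nouns(sorted(good_qualities))
    print(ranked.most_common(10))
```

The most frequently hit nouns across the most starter words are the best synonym candidates, which is exactly the “degree of overlap” measure described above.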

Fred Flintstone avoiding the TV salesman

I like to use the “Flintstones” instead of the “Jetsons” for all my examples, as the past just repeats itself and the anarcho-primitivists strive for more “learn, live, learn” perfection than the spoiled techno-anarchists of the future. And this is especially true for salesmen as anti-salesmen, like when Fred got mad at some TV salesman and turned off the TV, and the arm of the TV salesman reached out of the TV tube and turned the TV back on. With proxy scrapers, the anonymous robot can hit some radio buttons, pull down and click on some menu options, and force-feed Fred some must-have information!
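As a toy illustration of the robot “hitting radio buttons” through a proxy, here is a minimal Selenium sketch. The proxy address, URL, and element IDs are hypothetical placeholders, not a real site or the specific scraper described here.

```python
# A minimal sketch of an anonymous robot clicking form controls
# through a proxy. The proxy address, URL, and element IDs below
# are hypothetical placeholders for illustration only.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

options = webdriver.ChromeOptions()
options.add_argument("--proxy-server=http://127.0.0.1:8080")  # assumed proxy

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/survey")          # hypothetical page
    driver.find_element(By.ID, "opt-rubber").click()  # hit a radio button
    Select(driver.find_element(By.ID, "tire-menu")).select_by_visible_text(
        "Rubber tires"                                # pull-down menu option
    )
    driver.find_element(By.ID, "submit").click()      # force-feed the info
finally:
    driver.quit()
```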

So consider intense automated trends analysis built on scrape, spin, and post, with templates at first optimized by humans for word distribution: a spin of words within one template, plus multiple scrape-table templates (data merging, or template spins), where multiple templates of multiple scrape tables access mostly search-engine directory dumps with tabular data not indexed by Google, or data obtained quickly, cheaply, and at high resolution, at super-original data-production speeds, by audio/video cameras with image processing/analysis (e.g., facial recognition software, first employed by the Tampa Police Department in 2003) and sensors that produce information corresponding to all five senses. All of this can be optimized for the best Pagerank, or at first Woodrank (proactive Pagerank analysis to stay one step ahead of Pagerank’s ideal formula changes), and will eventually lead to audio-video-olfactory-taste-touch indexing and Pagerank/Woodrank considerations.

Best-fitting fill-in words, like articles, conjunctions, pronouns, linking verbs, and so on, can be inserted to separate the more important nouns, adjectives, and verbs (and in some cases non-seed words from those three categories can be inserted too), based on a search of the highest-ranking pages, isolating sentences, and finding the fill-in words that best match a search for the seed words. A grammar checker then rates each new sentence against rules designed to keep sequencing perfect (no comma splices, dangling participles, etc.), and/or the process searches again for the next-highest-ranking sentence and grammar-checks it, until the minimum grammar-score requirement is met, as sketched below. There are a number of good “parsing” algorithms already.
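A minimal sketch of the spin-then-grammar-check loop just described. The `{a|b|c}` spintax syntax, the `grammar_score` stub, and the 0.8 threshold are my assumptions for illustration; a production system would plug in a real grammar checker in place of the stub.

```python
# A toy spin/grammar-check loop. The {option|option} spintax, the
# grammar_score stub, and the 0.8 threshold are assumptions for
# illustration, not a specific product's behavior.
import random
import re

SPIN = re.compile(r"\{([^{}]+)\}")


def spin(template: str) -> str:
    """Expand a spintax template into one candidate sentence,
    picking a random option inside each {a|b|c} group."""
    while True:
        match = SPIN.search(template)
        if not match:
            return template
        choice = random.choice(match.group(1).split("|"))
        template = template[: match.start()] + choice + template[match.end():]


def grammar_score(sentence: str) -> float:
    """Stub scorer: reward capitalization and terminal punctuation.
    Swap in a real grammar checker here."""
    score = 0.0
    if sentence[:1].isupper():
        score += 0.5
    if sentence.rstrip().endswith((".", "!", "?")):
        score += 0.5
    return score


def best_spin(template: str, minimum: float = 0.8, tries: int = 50) -> str:
    """Spin repeatedly until the minimum grammar-score requirement
    is met, else return the highest-scoring candidate seen."""
    best, best_score = template, -1.0
    for _ in range(tries):
        candidate = spin(template)
        score = grammar_score(candidate)
        if score >= minimum:
            return candidate
        if score > best_score:
            best, best_score = candidate, score
    return best


print(best_spin("{Soft|Pliable|Spongy} tires {smooth|improve} the ride."))
```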

Trends analysis will predict the future, more accurately with more sub-factors that affect the seed keywords, usually looked up as related queries and related topics, plus a word-tracker lookup of synonyms and fluctuations of keywords, prioritized by high KEI (searches per month, squared, divided by competition, where competition is measured by a literal match with the keywords in quotes). You can be a jack of all trades and represent the world community with a search of the top 1,000 trending keywords on the Google Trends home page (good to do weekly, as Googlebot indexes about 500 pages every 3 days for zero-Pagerank websites), or a master of one with a limited variety of keywords/interests.

The initial margin of error is higher when the past data for the composite wave or wave packet is “ratty”, and standard-deviation analysis of the real versus ideal wave fit affects the long-term margin-of-error predictions. The margin of error goes up over time, at first at a faster pace, then at a slower pace, and can be understood with saddle-curve regression. But everything is Fourier or “wave-like” from a certain point of view, and like a power series that converges to the number pi, there is no infinitely well-known precision, because pi is an irrational number with an infinite number of digits, yet the margin of error is negligible, and my theory that “there is no such thing as a coincidence” may be disproven. Sharing this information from a “competition is good” mentality means the world community affects the trend for the better, and the margin of error goes down over time. But you are certainly welcome to “corner the market” and prosper from that which most firms treasure as company confidential, and therefore essential, in an overly competitive market, especially as a start-up.
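A minimal sketch of the KEI prioritization as defined above (searches per month squared, divided by competition). The sample keyword numbers are made up for illustration.

```python
# KEI as defined above: (monthly searches)^2 / competition, where
# competition is the count of literal-match results for the quoted
# keyword. The sample numbers below are made up for illustration.
def kei(monthly_searches: int, competition: int) -> float:
    """Keyword Effectiveness Index: higher means more demand per rival page."""
    return monthly_searches ** 2 / max(competition, 1)  # avoid divide-by-zero


keywords = {
    "rubber tires": (12_000, 850_000),   # (searches/month, quoted-match hits)
    "nano-energy": (900, 4_200),
    "helicopter wheel tire": (40, 310),
}

# Prioritize the keyword list by descending KEI.
for phrase, (searches, rivals) in sorted(
    keywords.items(), key=lambda kv: kei(*kv[1]), reverse=True
):
    print(f"{phrase!r}: KEI = {kei(searches, rivals):,.1f}")
```

Note how the squared numerator rewards demand more than it punishes competition: the niche “nano-energy” outranks “rubber tires” here despite far fewer searches.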

Predictions are made with Fourier calculations: looking to the past for the wave-like nature of everything, or sine-curve regression, where waves have amplitude, frequency, and phase, all three change, and then you have rates of change, for more degrees of changing rates and a greater understanding of the inherent “repeatability” of the phenomenon you are studying. Data from the more distant past is deprecated and the more recent past appreciated, by a linear-regression or best-fitting-line correction. The “degree of trendiness”, or DOT, as a constant, assumes (as of 2017) that if there is a 5-year record of data, we predict the next five-year trend, and the DOT is just the slope of the line for the next five years based on linear-regression analysis; but it will be corrected for the principle that the near-term market is more predictable and important than the long-term market, so the value of the DOT is also deprecated over time with a linear correction. The time intervals evaluated can be input by the researcher, since we all have a different idea of the best time interval; it must be a week or greater, for now, because of the limitations of Google Trends, and most people are interested in the three-month trend, as fiscal quarters are three months. We can produce a second, less complicated DOT for the exact time interval you insert, with just one slope calculation, for those who want to isolate short-term versus long-term market evaluations.
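A minimal sketch of the simple one-slope DOT described above, using numpy’s weighted least-squares line fit, with linearly rising weights so the recent past is “appreciated” and the distant past “deprecated”. The weighting scheme and the sample series are my assumptions.

```python
# A toy "degree of trendiness" (DOT): the slope of a best-fitting line
# over a keyword's search-volume history, with linearly rising weights
# so recent points count more. The weighting scheme and sample series
# are assumptions for illustration.
import numpy as np


def dot(volumes: np.ndarray) -> float:
    """Weighted linear-regression slope; units: searches per interval."""
    t = np.arange(len(volumes), dtype=float)       # time index
    weights = np.linspace(1.0, 2.0, len(volumes))  # recent past appreciated
    slope, _intercept = np.polyfit(t, volumes, deg=1, w=weights)
    return slope


# Five years of quarterly search volumes (made-up numbers).
history = np.array([100, 110, 105, 120, 130, 128, 140, 150,
                    148, 160, 170, 165, 180, 190, 188, 200,
                    210, 205, 220, 230], dtype=float)

print(f"DOT = {dot(history):.2f} searches per quarter")
```

A Fourier refinement would fit amplitude, frequency, and phase on the residuals left after this line is removed; the single slope is the “second, less complicated DOT” for a researcher-chosen interval.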

Join as a seller to learn how to easily set up your affiliate-tracking software, attract resellers with our own affiliate program to earn commission off new sellers and eventually affiliates, and access a large inventory of eager, 100% permission-based affiliate recipients. Easily populate your affiliate application for new affiliates with automated usernames/passwords, enter the PayPal e-mail addresses of affiliates, easily pay mass commissions for peer-to-peer-sent surfers/customers, and pay profit-sharing or bulk “bonus commissions” when you and your affiliates earn a higher Pagerank and more search-engine customers, because they understand that site-to-site surfers redirected as potential customers are not as common as those who look you up on Google, where, if surfers don’t find what they want, they usually go to a new search result.

For those of you who are Web programmers (PHP/MySQL/JavaScript/jQuery/Node.js/AngularJS, Python/MongoDB), you will be considered for team development, and you will have a chance to register with WordPress administrator rights on a separate platform, with a concurrent versioning system, to create plugins: Fourier calculators; scrape/spin/post/clean/solicit algorithms that become automated; better filters for robot block/admit standards, with retroactive cleanup of old backlinks and content, on all possible affiliate-desired-and-assigned criteria; immediate seller-payment confirmation; subtraction of all low- or non-paying sellers input into the cleaning plugin; and so on. Plugins are open source with charges for upgrades, with profit sharing as affiliates for all approved programmers. (Prospective programmers will answer exam essay questions, voted on by other applicants, with higher vote values for approved applicants based on the number of days since the applicant was approved, e.g., 365 days/one day for what may be the first approved applicant, 30 days/one day for what may be the person second in seniority, and so on; a toy version of this voting rule is sketched below.)
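A toy version of the seniority-weighted voting rule mentioned above: each approved applicant’s vote weight equals their days of seniority, so earlier approvals carry more weight. The names and dates are made up for illustration.

```python
# Toy applicant-voting rule: vote weight = days since approval,
# so the first approved programmer (~365 days) outweighs the
# second (~30 days). Names and dates are made up.
from datetime import date

approved_on = {
    "first_programmer": date(2016, 1, 1),    # ~365 days of seniority
    "second_programmer": date(2016, 12, 2),  # ~30 days of seniority
}


def vote_weight(applicant: str, today: date = date(2017, 1, 1)) -> int:
    """Weight = days since approval (one vote-unit per day)."""
    return (today - approved_on[applicant]).days


for name in approved_on:
    print(name, vote_weight(name))
```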

In a nutshell, we are almost at a point where assembly lines will manufacture cookie-cutters, not just cookies: software controls the hardware, or mechanical aspect, of robots that collect raw materials and convert them into something useful through trial and error, and automation will become 100% independent.

Make sure you read my Building Pyramids page for the best success in building strong, permanent, multi-tier pyramids.