Smart ontologies
Semantic web technologies introduced interesting ideas like RDF and semantic reasoning with OWL: we can produce new facts from old facts.
A set of Django models is just one kind of ontology specification: it's used to create database schemas and to generate ORM queries.
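As a minimal sketch of that idea (hypothetical Person/Friendship models, not from any real project), the same class definitions double as a schema and as a query vocabulary:

from django.db import models

class Person(models.Model):
    # each field becomes a database column when migrations are generated
    name = models.CharField(max_length=100)
    age = models.PositiveIntegerField()

class Friendship(models.Model):
    # a binary relation between two Person instances
    source = models.ForeignKey(Person, related_name="likes", on_delete=models.CASCADE)
    target = models.ForeignKey(Person, related_name="liked_by", on_delete=models.CASCADE)

# the same definitions drive ORM queries, e.g. Person.objects.filter(age__lt=25)

(This only runs inside a configured Django project, so treat it as a sketch of the idea rather than a drop-in file.)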
The Infinity family has a rich ontology because it can be used to run a business. Business software like ERP has a complicated ontology.
Multiple kinds of things can be considered to be ontologies.
There are other technologies, such as RDF and OWL, which allow reasoning over relationships. There is an application I recommend called Protege which is very good for automated reasoning.
I can say that a mother is a female human with a child, and then I can infer the fact that a particular woman is a mother.
Having knowledge graphs allows for powerful automated reasoning and automation opportunities.
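As a toy sketch of that kind of derivation (all names invented), without any semantic web machinery:

facts = {
    ("female", "alice"),
    ("human", "alice"),
    ("parent_of", ("alice", "bob")),
}

def derive_mothers(facts):
    # mother(X) holds whenever female(X), human(X) and parent_of(X, _) hold
    derived = set()
    for kind, args in facts:
        if kind == "parent_of":
            parent, _child = args
            if ("female", parent) in facts and ("human", parent) in facts:
                derived.add(("mother", parent))
    return derived

print(derive_mothers(facts))   # {('mother', 'alice')}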
In my fact collector project I use Prolog to do some reasoning.
An Inference is a query that finds bindings for a free variable, such as X. A Logic entry is a statement asserted to be true. Here I ask two questions: (1) who am I mutually friends with, and (2) who am I friends with who doesn't consider me a friend.
"Logic likes(sam, john).",
"Logic likes(sam, peter).",
"Logic likes(john, sam)."
"Inference and(likes(sam, X), likes(X, sam)).",
"Inference and(likes(sam, X), \+(likes(X, sam))).",
]
The answer to the first question is john, because john likes sam back. The answer to the second question is peter, because sam likes peter but peter does not like sam back.
We need a rich specification of data relationships to create instances of ontologies.
With ontologies that define steps or temporal relationships, like Datalog, we can create automated workflow systems or automated interoperability.
With ontologies we can traverse the system itself.
https://stackoverflow.com/questions/10263970/traversing-recording-matched-predicates
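As a rough sketch of that traversal idea (plain Python instead of Prolog, reusing the likes facts from above), recording which predicates matched along the way:

edges = {
    ("sam", "likes", "john"),
    ("sam", "likes", "peter"),
    ("john", "likes", "sam"),
}

def traverse(start, edges, path=()):
    # depth-first walk that yields each reachable node together with the matched edges
    yield start, path
    for subj, pred, obj in edges:
        if subj == start and (subj, pred, obj) not in path:
            yield from traverse(obj, edges, path + ((subj, pred, obj),))

for node, matched in traverse("sam", edges):
    print(node, matched)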
Create a polycontext metasymbol, and overcome the fact that standardization does not generalize.
Well, triples are redundant, because tuples are enough:
(a, b, c) = ((a, b), (b, c))
(the point (video) I made in an e-mail to [Telmo]). Thus, we can think of triple stores as just semantic indices. Indices speed up querying, yes, but otherwise they are redundant. When it comes to semantic indexing, it would make sense to make such "triples" not just between the more popular graph nodes, but between hypergraph nodes as well (doing the full power-set indexing would likely exhaust computational resources in most cases).
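As a sketch of what such a "semantic index" could look like in the simplest case (made-up data), indexing triples by their (subject, predicate) pair:

from collections import defaultdict

triples = [
    ("alice", "uses", "gmail"),
    ("alice", "age", 23),
    ("bob", "uses", "fastmail"),
]

sp_index = defaultdict(list)        # (subject, predicate) -> objects
for s, p, o in triples:
    sp_index[(s, p)].append(o)

print(sp_index[("alice", "uses")])  # ['gmail']

The triples carry no information beyond the underlying pairs; the index only makes certain lookups fast.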
Is there such a concept as "semantic indexing" in the literature at all? It seems nobody calls building triple stores for a database "semantic indexing".
Also, that example was an inferred rule: "people aged under 25 use Gmail" is something that is learnt by the database from the data.
It's a correlation of every piece of data with every other piece of data, and could be implemented with a simple loop and a correlation function.
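A naive sketch of that loop (column names invented; statistics.correlation needs Python 3.10+):

from itertools import combinations
from statistics import correlation

table = {
    "age":        [22, 24, 31, 45],
    "uses_gmail": [1, 1, 0, 0],
    "logins":     [30, 25, 10, 5],
}

for a, b in combinations(table, 2):
    # Pearson correlation of every column with every other column
    print(a, b, round(correlation(table[a], table[b]), 2))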
The problem with computed properties in a programming language - outside the database - is that they're not very efficient. You would need truth maintenance, which can be expensive if naively implemented.
Blazegraph (since acquired by Amazon) and Jena Fuseki are triple stores that have truth maintenance features.
Don't discount what triple stores bring to the table.
If a database could have virtual properties that were implemented inside the database - and updated on any insert or change of data - then yes, it could be efficient.
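A toy sketch of such an in-database virtual property (everything here is invented), recomputed on every write instead of every read:

class TinyStore:
    def __init__(self):
        self.rows = {}                    # id -> row dict
        self.under_25_only_gmail = True   # derived (virtual) property

    def _maintain(self):
        # crude truth maintenance: recompute the derived property on every write;
        # a real system would update it incrementally
        self.under_25_only_gmail = all(
            row["mail"] == "gmail"
            for row in self.rows.values() if row["age"] < 25
        )

    def upsert(self, rid, row):
        self.rows[rid] = row
        self._maintain()

store = TinyStore()
store.upsert(1, {"age": 22, "mail": "gmail"})
store.upsert(2, {"age": 24, "mail": "fastmail"})
print(store.under_25_only_gmail)   # False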
Isn't it just applying computed (virtual) properties to "sets of objects interlinked by desired properties" (a combined virtual object)?
For an example of computed properties, we can think of
"if someone is aged under 25 they only use Gmail as their mail provider based on database data"
as a single computed boolean property, namely
Object.use_only_gmail(age): age < 25 ? True : False
, applied to the objects that have an "age" property. An implication can be viewed as just a property computation. The confidence level can be described, too, by simply computing the property and observing that, in actuality, this statement covers just 95% of cases. For an example of a combined virtual object, consider the query below:
"Search for the cases, where collocation of exactly 2 objects aged above 25 had spawned 2 living objects aged below 1 during a period of less than 1 day."
Assuming that the occurrences of "spawning 2 objects" and "collocation" are not something that the database naturally tracks, computing such a property would involve creating a "combined virtual object" (say, an occurrence where the graph pattern of spawning objects with collocation is observed), and then computing the boolean property on such virtual objects, answering whether exactly 2 objects were spawned.
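A minimal sketch (toy records) of the first example: the implication is just a boolean property computed per object, and the confidence is the share of under-25 objects for which it actually holds:

people = [
    {"age": 22, "mail": "gmail"},
    {"age": 24, "mail": "gmail"},
    {"age": 23, "mail": "fastmail"},
    {"age": 40, "mail": "yahoo"},
]

def use_only_gmail(person):
    return person["mail"] == "gmail"

under_25 = [p for p in people if p["age"] < 25]
confidence = sum(use_only_gmail(p) for p in under_25) / len(under_25)
print(round(confidence, 2))   # 0.67 on these toy records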
I don't see why we'd need triple stores anymore: it's all more naturally doable with just computed properties and their patterns specified by queries. A pattern is just a "combined virtual object", so a query is just a construction of a "template virtual object" (in fact, I've explained that in the "purposefulness" section about desired data properties, when supplemented with a metaformat). This would enable querying for any patterns imaginable.
I don't know how reasoning engines work, but I think it's a repeated application of modus ponens.
It would be nice if you had one built into a database, or a Prolog engine built into a database. You could generate facts like "if someone is aged under 25 they only use Gmail as their mail provider" based on database data.
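A rough sketch of that kind of repeated rule application (forward chaining to a fixed point; the facts and the single rule are invented for illustration):

facts = {("age_under_25", "alice"), ("mail_provider", ("alice", "gmail"))}

def rule_only_gmail(facts):
    # if someone is under 25 and their provider is gmail, assert only_uses_gmail
    new = set()
    for kind, arg in facts:
        if kind == "age_under_25" and ("mail_provider", (arg, "gmail")) in facts:
            new.add(("only_uses_gmail", arg))
    return new

rules = [rule_only_gmail]

changed = True
while changed:          # keep applying rules until nothing new is derived
    changed = False
    for rule in rules:
        derived = rule(facts) - facts
        if derived:
            facts |= derived
            changed = True

print(facts)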
It would be nice to have an ontology for computers and the relationships between files, processes, threads, containers, permissions, etc.
Then we would have a simple data structure for everything that wasn't so implementation defined.
Reasoning on ontologies is a special case of querying datasets, and most databases are just specialized ontologies, optimized for certain types of queries. Some databases, like triple stores, may be optimized for logical inferences.
You're correctly noticing that the Infinity family ontology is pragmatic in the business sense. In fact, I had worked on Odoo (previously OpenERP), which is a Wordpress-like framework for enterprises to run on, and I thought (back in 2010) that AI-augmented corporations are already happening, so we need a system that would enable them to be transparent with society. Since companies are just sums of people, I thought there must exist a common denominator between how individuals, companies, and even governments operate, and the Infinity family ontology is an attempt at arriving at that common denominator from first principles, described in the paper on the equation model. A more concrete version of that is the NRV (network resource vocabulary), the idea of which is to introduce something like HTTP response code numbers, but as semantic codes attached to data objects.
In theory, then, to make systems understandable, we can go around all the systems (such as each app) and data packets (such as internet traffic) and project them into the human semantic space -- by having such codes attached to their tables, requests, and responses -- making all systems understandable to humans, and even mathematically tractable.