Supercategories for Public Intelligence Standardization
Create standard supercategories for classifying data sources, based on the equation ontology, to make data more actionable.
So the idea is that, using the equation model ontology (F(X)=Y), we could create specific supercategories to classify classification systems and data sources. Check the collaborative document. For example, version zero of it may look something like this:
1. Goals 200
- TSV 210: Technological-Scientific Vision,
- NPL 210: National Policy Goal,
- NLG 210: National Legislature Goal,
- RLG 210: Regional Legislature Goal,
- OP 210: Organization Policy Goal,
- RDG 210: Regional Development Goal,
- NMG 210: NGO/NPO Mission Goal,
- ECI 210: Ethnic-Cultural Intent,
- INTT 210: International Treaty.
2. Ideas 400
- ICAT 220: Industry Category Code,
- PCAT 220: Product Category Code (e.g., HS),
- ACAT 220: Economic Activity Category Code (e.g., NACE, SIC, NAICS),
- PTN 450: Patent Number,
- SPN 420: Scientific Publication Number (e.g., DOI),
- TRP 430: Technical Report,
- PROT 430: Laboratory or Medical Protocol (e.g., Protocol-Online),
- INBS 410: Innovative Brainstorming Idea (e.g., Halfbakery),
- CD-REPO 430: Code Repository (ideas for runtime processes),
- ISTD 220: Industrial Standard Code.
3. Plans 300
- CPN 330: Company Project Name,
- CSPN 330: Consortium Programme,
- PPN 360: Personal Project Name,
- PCN 360: Project Codename,
- MID 360: Mission ID.
4. Operations 500
- CPU-OPS 510: Floating Point Operation,
- NET-RQST 511: Network Request Operation,
- UI-MOVE 510: User Interface Movement,
- ORG-TASK 530: Organization Task,
- ORG-PROD 533: Organization Product (manufacturing operation),
- TRD-ORD 520: Market Trade Order (GAAP taxonomy, IFRS taxonomy),
- MTF 526: Money Transfer,
- ATF 523: Asset Transfer (e.g., Shipment),
- ITF 511: Information Transfer (e.g., Message, File Upload, etc., overlaps with NET-RQST),
- MED-OP 534: Medical Operation,
- LAB-OP 534: Laboratory Operation,
- WEB-DEPLOYMENTS 511: CI/CD-based online systems deployment operation.
5. Assets 300
↳ 1) Agents 330, 370
- CRED 330: Company Registry ID (e.g., D-U-N-S),
- CNID 330: Company National ID,
- INID 370: Individual National ID,
- SNET 330, 370: Social Net ID.
↳ 2) Things 310, 320, 460, 470, 480
- NREIDs 310: National Real Estate IDs (e.g., Cadastre),
- NTEID 310: National Tangible Asset IDs (e.g., National car registry, National Phone registry),
- INSTRID 480: Instrumentation/Industrial Machinery ID,
- COMIDS 470: Commodity Product Unit ID,
- FINIDS 470: Financial Product Unit ID,
- WASID 320: Web Asset ID (e.g., MAC address).
↳ 3) Topics 100*
6. Places 150*
- RLOC 151: Real Location (e.g., Address, WGS coordinates, WCS/FITS),
- VLOC 153: Virtual Location (e.g., IP/IPv6 address, DNS name, a region of a neural network). Computer and phone address registries alone cover 4.3+ bn IPv4 addresses; locations of thoughts in neural networks would also fit here.
7. Events 120
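As a sketch of how the draft table above might be held in code: the numeric codes and names below are taken from the outline, but the data structure and the "family" lookup helper are only my assumption of how the hundreds/tens pattern could work, not part of the proposal.

```python
# Minimal sketch of the draft supercategory table above.
# Codes and names come from the outline; the structure and the
# lookup rule are only an illustration, not a specification.

SUPERCATEGORIES = {
    200: "Goals",
    400: "Ideas",
    300: "Plans",
    500: "Operations",
    # "Assets" is also listed as 300 in the draft; that overlap
    # with "Plans" is kept here as-is.
    150: "Places",
    120: "Events",
}

def supercategory_of(code: int) -> str:
    """Map a source-type code to its supercategory name.

    Tries the narrower 'tens' family first (151 -> 150 -> Places),
    then falls back to the 'hundreds' family (430 -> 400 -> Ideas).
    """
    fam10 = (code // 10) * 10
    fam100 = (code // 100) * 100
    return SUPERCATEGORIES.get(fam10) or SUPERCATEGORIES.get(fam100, "Unknown")

print(supercategory_of(430))  # Ideas
print(supercategory_of(151))  # Places
```

The two-step lookup is needed because Places (150) and Events (120) sit inside the 100 range rather than on a clean hundreds boundary.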
The sole purpose is to introduce, evolve, and maintain a data alignment protocol: a global financial think-tank for pursuing goals together.
Actually, through time and reflections on Network of Functions and World Mapping Assistant, I have come up with a higher-level categorization system that is more compact and usable. It revolves around the concept of Systems, and involves only 5 classes of concepts:
400: Method, and
500: Operation. The details are in V2 ("Network Resource Vocabulary"). I currently use it to organize all crawled data. It follows a pattern similar to how we categorize HTTP responses with HTTP status codes. Perhaps the supercategories here could be used to extend that network resource vocabulary.
I wonder: has something similar already been done by others, and what approaches have they come up with?
There is something more to consider. Today, companies train deep-learning models to answer specific questions (for example, identity and face-recognition models, weather models, etc.), and these specific models are used as resources by integrative decision systems to make decisions.
So, just as we built network protocols in layers of abstraction, one upon another (e.g., the layers of the OSI model), we could have standards for deep-learned models and build social AI from the ground up by combining multiple standardized AI models.
Having versioned and standardized machine-learned models would allow us to specify the qualities and blind spots of these models, and to confidently version them, incrementally improve them, and use them in derived applications.
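To make the idea concrete, here is a hypothetical sketch of what a registry record for such a standardized model might carry. Every field name and the major-version compatibility rule are my assumptions for illustration; no such standard exists yet, as the post itself notes.

```python
from dataclasses import dataclass, field

# Hypothetical "standardized model" registry record.
# All field names and the compatibility rule are assumptions,
# not an existing standard.

@dataclass
class StandardModelRecord:
    name: str                       # e.g., "face-recognition"
    version: str                    # version of the trained weights
    task: str                       # the question the model answers
    known_blind_spots: list = field(default_factory=list)

    def is_compatible(self, required: str) -> bool:
        """Naive rule: same major version implies compatibility."""
        return self.version.split(".")[0] == required.split(".")[0]

face = StandardModelRecord(
    name="face-recognition",
    version="2.1.0",
    task="identity verification",
    known_blind_spots=["low-light images"],
)
print(face.is_compatible("2.0.0"))  # True
```

Recording known blind spots alongside the version is what would let derived applications decide whether a given model release is safe to build on.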
For example, imagine that the definition of the concept "Manga" is given not by a dictionary but by an ANN, like a Manga GAN, and becomes something like an ISO-standard model of what "Manga" looks like. Many AI systems are already versioned (for example, Google Translate), and their properties are known. So, think of the many concepts and complex phenomena that we could build AI models of, and standardize.
Perhaps this comment merits a separate post on the idea of ISO-style standardization for AI models.
Currently, while importing datasets, I have started using this scheme, auto-generating categories for sources, like so:
Y:IDEA:TRP:NTRS, to refer to the NASA Technical Reports Server.
This may even be useful for bringing order to our categories for imported data here, on 0 -> oo :)
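For completeness, a tiny parser for such auto-generated category strings. The four-part ROLE:SUPERCATEGORY:TYPE:SOURCE layout is inferred from the single example above (Y:IDEA:TRP:NTRS), so treat the field names as guesses:

```python
from typing import NamedTuple

class SourceTag(NamedTuple):
    role: str            # "Y" -- the equation side (F(X)=Y) the source maps to
    supercategory: str   # e.g., "IDEA"
    source_type: str     # e.g., "TRP" (Technical Report)
    source: str          # e.g., "NTRS" (NASA Technical Reports Server)

def parse_tag(tag: str) -> SourceTag:
    """Split an auto-generated category string into its parts.

    The four-field layout is inferred from the one example in the
    post; field names here are assumptions, not a defined format.
    """
    role, supercategory, source_type, source = tag.split(":")
    return SourceTag(role, supercategory, source_type, source)

print(parse_tag("Y:IDEA:TRP:NTRS"))
```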