The full dataset viewer is not available. Only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      Failed to parse string: 'RE42167' as a scalar of type int64
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2245, in cast_table_to_schema
                  arrays = [
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2246, in <listcomp>
                  cast_array_to_feature(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2102, in cast_array_to_feature
                  return array_cast(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1949, in array_cast
                  return array.cast(pa_type)
                File "pyarrow/array.pxi", line 996, in pyarrow.lib.Array.cast
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/compute.py", line 404, in cast
                  return call_function("cast", [arr], options, memory_pool)
                File "pyarrow/_compute.pyx", line 590, in pyarrow._compute.call_function
                File "pyarrow/_compute.pyx", line 385, in pyarrow._compute.Function.call
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Failed to parse string: 'RE42167' as a scalar of type int64
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
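
The innermost frame is the real failure: a column declared as int64 in the dataset schema contains the string 'RE42167', a reissue patent number that cannot be parsed as an integer. The failing cast can be reproduced directly with pyarrow; a minimal sketch (the surrounding values are illustrative):

    import pyarrow as pa

    # Patent-number columns mostly hold digit strings, but reissue
    # patents carry an 'RE' prefix and are not representable as int64.
    patent_ids = pa.array(["10001384", "10001759", "RE42167"])

    # The datasets worker performs the same cast when applying the
    # declared schema; this raises:
    #   pyarrow.lib.ArrowInvalid: Failed to parse string: 'RE42167'
    #   as a scalar of type int64
    patent_ids.cast(pa.int64())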

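One way to unblock generation is to declare the identifier columns that can hold non-numeric values as strings, so that no int64 cast is attempted. The error does not say which column holds 'RE42167', so typing both appl_id and doc_id as strings below is an assumption, and the repository id is a placeholder:

    from datasets import Features, Value, load_dataset

    # Declare the schema explicitly instead of letting the builder
    # infer int64 for the identifier columns.
    features = Features({
        "appl_id": Value("string"),      # assumption: may hold non-numeric ids
        "flag_patent": Value("int64"),
        "doc_id": Value("string"),       # assumption: reissue numbers like 'RE42167'
        "claim_sequence": Value("int64"),
        "claim_text": Value("string"),
        "dependent": Value("string"),
        "claim_number": Value("int64"),
    })

    ds = load_dataset("owner/patent-claims", features=features)  # placeholder repo id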

Column          Type
--------------  ------
appl_id         int64
flag_patent     int64
doc_id          int64
claim_sequence  int64
claim_text      string
dependent       string
claim_number    int64
12,617,487
1
10,001,384
0
1. A method comprising: receiving, by an apparatus, data specifying a reference point-of-interest specified by a user and location data of a search region; retrieving, by the apparatus, a reference vector specifying a plurality of features associated with the reference point-of-interest; determining, by the apparatus, a plurality of candidates for similar points-of-interest based, at least in part, on the search region; retrieving, by the apparatus, candidate feature vectors specifying a plurality of features associated with respective candidates by at least real-time extracting semantic topics from one or more text descriptions and one or more user reviews for each of the candidates using a language model assigned a probability to each word thereof, and generating entries of the semantic topics in the feature vectors for each of the candidates, wherein the user reviews include one or more reviews by the user; determining, by the apparatus, a similarity score for each of the candidates via comparing the plurality of features of respective candidates to the plurality of features of the reference point-of-interest to determine a number of common features (n) shared between the plurality of features of the respective candidates and the plurality of features of the reference point-of-interest, and calculating the similarity score using an equation including a weighting vector (w), the reference feature vector (r), a respective candidate feature vector (p): $\mathrm{similarity}(r, p) = \sum_{i=1}^{n} w_i r_i p_i$, where i = 1 to the number of common features shared between the reference vector and the candidate feature vectors; and generating, by the apparatus, a list of one or more similar points-of-interest based on the similarity scores.
null
1
12,617,487
1
10,001,384
1
2. The method of claim 1 , wherein one or more user preferences of a target user are common in the reference feature vector and the at least one other feature vector, and the method further comprising: retrieving a machine learning algorithm; and adjusting the weighting vector based on the one or more user preferences using the machine learning algorithm.
claim 1
2
12,617,487
1
10,001,384
2
3. The method of claim 1 , further comprising: selecting one of the candidates for presentation based on the similarity scores, wherein the one candidate is associated with candidate data describing the candidate; and causing, at least in part, actions leading to the presentation of the candidate data.
claim 1
3
12,617,487
1
10,001,384
3
4. The method of claim 1 , wherein the features comprise classification features, tag features, price features, ratings features, the semantic topics, or a combination thereof.
claim 1
4
12,617,487
1
10,001,384
4
5. The method of claim 1 , wherein the data is received from a user equipment and the search region is determined based on a location of the user equipment.
claim 1
5
12,617,487
1
10,001,384
5
6. The method of claim 1 , wherein the plurality of features is based on a taxonomy mapping tree structure that classifies the reference point-of-interest and the plurality of candidates into one or more categories, and wherein each node of the taxonomy mapping tree structure increases in classification specificity.
claim 1
6
12,617,487
1
10,001,384
6
7. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following, receive data specifying a reference point-of-interest specified by a user and location data of a search region; retrieve a reference vector specifying a plurality of features associated with the reference point-of-interest; determine a plurality of candidates for similar points-of-interest based, at least in part, on the search region; retrieve candidate feature vectors specifying a plurality of features associated with respective candidates by at least real-time extracting semantic topics from one or more text descriptions and one or more user reviews for each of the candidates using a language model assigned a probability to each word thereof, and generating entries of the semantic topics in the feature vectors for each of the candidates, wherein the user reviews include one or more reviews by the user; determine a similarity score for each of the candidates via comparing the plurality of features of respective candidates to the plurality of features of the reference point-of-interest to determine a number of common features (n) shared between the plurality of features of the respective candidates and the plurality of features of the reference point-of-interest, and calculating the similarity score using an equation including a weighting vector (w), the reference feature vector (r), a respective candidate feature vector (p): $\mathrm{similarity}(r, p) = \sum_{i=1}^{n} w_i r_i p_i$, where i = 1 to the number of common features shared between the reference vector and the candidate feature vectors; and generate a list of one or more similar points-of-interest based on the similarity scores.
null
7
12,617,487
1
10,001,384
7
8. The apparatus of claim 7 , wherein one or more user preferences of a target user are common in the reference feature vector and the at least one other feature vector, and the apparatus is further caused, at least in part, to: retrieve a machine learning algorithm; and adjust the weighting vector based on the one or more user preferences using the machine learning algorithm.
claim 7
8
12,617,487
1
10,001,384
8
9. The apparatus of claim 7 , wherein the apparatus is further caused, at least in part, to: select one of the candidates for presentation based on the similarity scores, wherein the one candidate is associated with candidate data describing the candidate; and cause, at least in part, actions leading to the presentation of the candidate data.
claim 7
9
12,617,487
1
10,001,384
9
10. The apparatus of claim 7 , wherein the features comprise classification features, tag features, price features, ratings features, the semantic topics, or a combination thereof.
claim 7
10
12,617,487
1
10,001,384
10
11. The apparatus of claim 7 , wherein the data is received from a user equipment and the search region is determined based on a location of the user equipment.
claim 7
11
12,617,487
1
10,001,384
11
12. The apparatus of claim 7 , wherein the plurality of features is based on a taxonomy mapping tree structure that classifies the reference point-of-interest and the plurality of candidates into one or more categories, and wherein each node of the taxonomy mapping tree structure increases in classification specificity.
claim 7
12
12,617,487
1
10,001,384
12
13. A non-transitory computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to at least perform the following steps: receiving data specifying a reference point-of-interest specified by a user and location data of a search region; retrieving a reference vector specifying a plurality of features associated with the reference point-of-interest; determining a plurality of candidates for similar points-of-interest based, at least in part, on the search region; retrieving candidate feature vectors specifying a plurality of features associated with respective candidates by at least real-time extracting semantic topics from one or more text descriptions and one or more user reviews for each of the candidates using a language model assigned a probability to each word thereof, and generating entries of the semantic topics in the feature vectors for each of the candidates, wherein the user reviews include one or more reviews by the user; determining a similarity score for each of the candidates via comparing the plurality of features of respective candidates to the plurality of features of the reference point-of-interest to determine a number of common features (n) shared between the plurality of features of the respective candidates and the plurality of features of the reference point-of-interest, and calculating the similarity score using an equation including a weighting vector (w), the reference feature vector (r), a respective candidate feature vector (p): $\mathrm{similarity}(r, p) = \sum_{i=1}^{n} w_i r_i p_i$, where i = 1 to the number of common features shared between the reference vector and the candidate feature vectors; and generating a list of one or more similar points-of-interest based on the similarity scores.
null
13
12,617,487
1
10,001,384
13
14. The computer-readable storage medium of claim 13 , wherein one or more user preferences of a target user are common in the reference feature vector and the at least one other feature vector, and the apparatus is caused, at least in part, to further perform: retrieving a machine learning algorithm; and adjusting the weighting vector based on the one or more user preferences using the machine learning algorithm.
claim 13
14
12,617,487
1
10,001,384
14
15. The computer-readable storage medium of claim 13 , wherein the apparatus is caused, at least in part, to further perform: selecting one of the candidates for presentation based on the similarity scores, wherein the one candidate is associated with candidate data describing the candidate; and causing, at least in part, actions leading to the presentation of the candidate data.
claim 13
15
12,617,487
1
10,001,384
15
16. The computer-readable storage medium of claim 13 , wherein the features comprise classification features, tag features, price features, ratings features, the semantic topics, or a combination thereof.
claim 13
16
12,617,487
1
10,001,384
16
17. The computer-readable storage medium of claim 13 , wherein the plurality of features is based on a taxonomy mapping tree structure that classifies the reference point-of-interest and the plurality of candidates into one or more categories, and wherein each node of the taxonomy mapping tree structure increases in classification specificity.
claim 13
17
14,820,370
1
10,001,759
0
1. A method of automatically generating an events dictionary in an Internet of Things (IoT) network, comprising: receiving a notification of a first event from a first IoT device in the IoT network; determining a state of the first IoT device before and after the first event; comparing the states of the first IoT device; determining a type of state change of the first event based on the comparing; determining whether the type of the state change of the first event is present in the events dictionary; creating a generic entry based on the type of the state change of the first event not being present in the events dictionary, wherein the type of the state change associated with the generic entry is common to IoT devices of a same type and/or class as the first IoT device; and storing, in the events dictionary, a mapping of an event description of the first event to the generic entry.
null
1
14,820,370
1
10,001,759
1
2. The method of claim 1 , wherein determining the state of the first IoT device before the first event includes periodically polling the first IoT device to retrieve the state of the first IoT device before the first event, and wherein determining the state of the first IoT device after the first event includes retrieving the state of the first IoT device after the first event.
claim 1
2
14,820,370
1
10,001,759
2
3. The method of claim 1 , further comprising: determining a type of the first IoT device, wherein the creating comprises creating the generic entry in the events dictionary for a type of IoT device matching the type of the first IoT device and a type of state change matching the type of the state change of the first event.
claim 1
3
14,820,370
1
10,001,759
3
4. The method of claim 1 , wherein the generic entry comprises an enumeration and a text description of the type of the state change associated with the generic entry.
claim 1
4
14,820,370
1
10,001,759
4
5. The method of claim 1 , further comprising: receiving a second notification of a second event by a second IoT device in the IoT network; determining a state of the second IoT device before and after the second event; comparing the states of the second IoT device; determining a type of state change of the second event based on the comparing; mapping the second event to the generic entry based on the second event being a same type of state change as the state change of the first event and a same type and/or class as the first IoT device; and storing a mapping of an event description of the second event received from the second IoT device to the generic entry.
claim 1
5
14,820,370
1
10,001,759
5
6. The method of claim 5 , wherein the event description of the second event is different from the event description of the first event.
claim 5
6
14,820,370
1
10,001,759
6
7. The method of claim 5 , wherein the generic entry describes a generic state change that is common to the first IoT device and the second IoT device, and further wherein, the generic entry includes event descriptions for events received from the first IoT device and the second IoT device.
claim 5
7
14,820,370
1
10,001,759
7
8. The method of claim 1 , further comprising: transmitting the events dictionary to other IoT devices in the IoT network.
claim 1
8
14,820,370
1
10,001,759
8
9. The method of claim 8 , further comprising: defining home automation rules based on generic events defined in the events dictionary; and distributing the home automation rules to the other IoT devices in the IoT network.
claim 8
9
14,820,370
1
10,001,759
9
10. The method of claim 9 , wherein a third IoT device in the IoT network receives an event notification from the first or a second IoT device in the IoT network, maps event information in the received event notification to the generic entry in the events dictionary, and executes the home automation rules defined for the generic entry in the events dictionary.
claim 9
10
14,820,370
1
10,001,759
10
11. An apparatus for automatically generating an events dictionary in an Internet of Things (IoT) network, comprising: a transceiver configured to receive a notification of a first event from a first IoT device in the IoT network; and at least one processor configured to: determine a state of the first IoT device before and after the first event; compare the states of the first IoT device; determine a type of state change of the first event based on the comparison of the states of the first IoT device; determine whether the type of the state change of the first event is present in the events dictionary; and create a generic entry based on the type of the state change of the first event not being present in the events dictionary, wherein the type of the state change associated with the generic entry is common to IoT devices of a same type and/or class as the first IoT device; and a memory configured to store, in the events dictionary, a mapping of an event description of the first event to the generic entry.
null
11
14,820,370
1
10,001,759
11
12. The apparatus of claim 11 , wherein the at least one processor being configured to determine the state of the first IoT device before the first event includes the at least one processor being configured to periodically poll the first IoT device to retrieve the state of the first IoT device before the first event, and wherein the at least one processor being configured to determine the state of the first IoT device after the first event includes the at least one processor being configured to retrieve the state of the first IoT device after the first event.
claim 11
12
14,820,370
1
10,001,759
12
13. The apparatus of claim 11 , wherein the at least one processor is further configured to determine a type of the first IoT device, wherein the at least one processor being configured to create comprises the at least one processor being configured to create the generic entry in the events dictionary for a type of IoT device matching the type of the first IoT device and a type of state change matching the type of the state change of the first event.
claim 11
13
14,820,370
1
10,001,759
13
14. The apparatus of claim 11 , wherein the generic entry comprises an enumeration and a text description of the type of the state change associated with the generic entry.
claim 11
14
14,820,370
1
10,001,759
14
15. The apparatus of claim 11 , wherein the transceiver is further configured to receive a second notification of a second event broadcasted by a second IoT device in the IoT network; wherein the at least one processor is further configured to: determine a state of the second IoT device before and after the second event; compare the states of the second IoT device; determine a type of state change of the second event based on a comparison of the states of the second IoT device; and map the second event to the generic entry based on the second event being a same type of state change as the state change of the first event and a same type and/or class as the first IoT device; and wherein the memory is further configured to store a mapping of an event description of the second event received from the second IoT device to the generic entry.
claim 11
15
14,820,370
1
10,001,759
15
16. The apparatus of claim 15 , wherein the event description of the second event is different from the event description of the first event.
claim 15
16
14,820,370
1
10,001,759
16
17. The apparatus of claim 15 , wherein the generic entry describes a generic state change that is common to the first IoT device and the second IoT device, and further wherein, the generic entry includes event descriptions for events received from the first IoT device and the second IoT device.
claim 15
17
14,820,370
1
10,001,759
17
18. The apparatus of claim 11 , wherein the transceiver is further configured to transmit the events dictionary to other IoT devices in the IoT network.
claim 11
18
14,820,370
1
10,001,759
18
19. The apparatus of claim 18 , wherein the at least one processor is further configured to define home automation rules based on generic events defined in the events dictionary; and wherein the transceiver is further configured to distribute the home automation rules to the other IoT devices in the IoT network.
claim 18
19
14,820,370
1
10,001,759
19
20. The apparatus of claim 19 , wherein a third IoT device in the IoT network receives an event notification from the first or a second IoT device in the IoT network, maps event information in the received event notification to the generic entry in the events dictionary, and executes the home automation rules defined for the generic entry in the events dictionary.
claim 19
20
14,820,370
1
10,001,759
20
21. A non-transitory computer-readable medium for automatically generating an events dictionary in an Internet of Things (IoT) network, comprising: at least one instruction instructing a device to receive a notification of a first event from a first IoT device in the IoT network; at least one instruction instructing the device to determine a state of the first IoT device before and after the first event; at least one instruction instructing the device to compare the states of the first IoT device; at least one instruction instructing the device to determine a type of state change of the first event based on a comparison of the states of the first IoT device; at least one instruction instructing the device to determine whether the type of the state change of the first event is present in the events dictionary; at least one instruction instructing the device to create a generic entry based on the type of the state change of the first event not being present in the events dictionary, wherein the type of the state change associated with the generic entry is common to IoT devices of a same type and/or class as the first IoT device; and at least one instruction instructing the device to store, in the events dictionary, a mapping of an event description of the first event to the generic entry.
null
21
14,820,370
1
10,001,759
21
22. The non-transitory computer-readable medium of claim 21 , wherein the at least one instruction instructing the device to determine the state of the first IoT device before the first event includes at least one instruction instructing the device to periodically poll the first IoT device to retrieve the state of the first IoT device before the first event, and wherein the at least one instruction instructing the device to determine the state of the first IoT device after the first event includes at least one instruction instructing the device to retrieve the state of the first IoT device after the first event.
claim 21
22
14,820,370
1
10,001,759
22
23. The non-transitory computer-readable medium of claim 21 , further comprising: at least one instruction instructing the device to determine a type of the first IoT device, wherein the at least one instruction instructing the device to create comprises at least one instruction instructing the device to create the generic entry in the events dictionary for a type of IoT device matching the type of the first IoT device and a type of state change matching the type of the state change of the first event.
claim 21
23
14,820,370
1
10,001,759
23
24. The non-transitory computer-readable medium of claim 21 , wherein the generic entry comprises an enumeration and a text description of the type of the state change associated with the generic entry.
claim 21
24
14,820,370
1
10,001,759
24
25. The non-transitory computer-readable medium of claim 21 , further comprising: at least one instruction instructing the device to receive a second notification of a second event from a second IoT device in the IoT network; at least one instruction instructing the device to determine a state of the second IoT device before and after the second event; at least one instruction instructing the device to compare the states of the second IoT device; at least one instruction instructing the device to determine a type of state change of the second event based on a comparison of the states of the second IoT device; at least one instruction instructing the device to map the second event to the generic entry based on the second event being a same type of state change as the state change of the first event and a same type and/or class as the first IoT device; and at least one instruction instructing the device to store a mapping of an event description of the second event received from the second IoT device to the generic entry.
claim 21
25
14,820,370
1
10,001,759
25
26. The non-transitory computer-readable medium of claim 25 , wherein the event description of the second event is different from an event description of the first event.
claim 25
26
14,820,370
1
10,001,759
26
27. The non-transitory computer-readable medium of claim 25 , wherein the generic entry describes a generic state change common to the first IoT device and the second IoT device and stores event descriptions for events received from the first IoT device and the second IoT device.
claim 25
27
14,820,370
1
10,001,759
27
28. The non-transitory computer-readable medium of claim 21 , further comprising: at least one instruction instructing the device to transmit the events dictionary to other IoT devices in the IoT network.
claim 21
28
14,820,370
1
10,001,759
28
29. The non-transitory computer-readable medium of claim 28 , further comprising: at least one instruction instructing the device to define home automation rules based on generic events defined in the events dictionary; and at least one instruction instructing the device to distribute the home automation rules to the other IoT devices in the IoT network.
claim 28
29
14,820,370
1
10,001,759
29
30. An apparatus for automatically generating an events dictionary in an Internet of Things (IoT) network, comprising: means for receiving configured to receive a notification of a first event from a first IoT device in the IoT network; means for processing configured to: determine a state of the first IoT device before and after the first event; compare the states of the first IoT device; determine a type of state change of the first event based on a comparison of the states of the first IoT device; determine whether the type of the state change of the first event is present in the events dictionary; and create a generic entry based on the type of the state change of the first event not being present in the events dictionary, wherein the type of the state change associated with the generic entry is common to IoT devices of a same type and/or class as the first IoT device; and means for storing configured to store, in the events dictionary, a mapping of an event description of the first event to the generic entry.
null
30
14,748,216
1
10,001,760
0
1. A system for controlling a vehicle using a set of competing adaptable predictive models of the dynamics of the vehicle, the system comprising: one or more processors and a non-transitory computer-readable medium having executable instructions encoded thereon such that when executed, the one or more processors perform operations of: using each competing adaptable predictive model, generating a set of predicted sensory data based on prior sensory data and prior vehicle control inputs, each competing adaptable predictive model comprising a neural network; comparing each set of predicted sensory data with subsequent sensory data collected from the vehicle; using the comparison of each set of predicted sensory data, automatically identifying one of the competing adaptable predictive models as a match; if a match is identified, modifying the adaptable predictive model that is the match according to a second set of sensory data, and controlling the vehicle using control outputs generated using the match; and if a match is not identified, generating a new adaptable predictive model through automatic model learning using the second set of sensory data, and controlling the vehicle using control outputs generated using the new adaptable predictive model.
null
1
14,748,216
1
10,001,760
1
2. The system as set forth in claim 1 , wherein the one or more processors further perform an operation of storing the matching adaptable predictive model or the new adaptable predictive model in a model database comprising the set of competing adaptable predictive models.
claim 1
2
14,748,216
1
10,001,760
2
3. The system as set forth in claim 2 , wherein the one or more processors further perform an operation of comparing a history of sensory data and previous control outputs for each competing adaptable predictive model in the set of competing adaptable predictive models against current sensory data describing the vehicle's current state in order to identify a match.
claim 2
3
14,748,216
1
10,001,760
3
4. The system as set forth in claim 3 , wherein the one or more processors further perform an operation of extrapolating current sensory data to new data regions.
claim 3
4
14,748,216
1
10,001,760
4
5. The system as set forth in claim 4 , wherein the one or more processors further perform an operation of using the matching adaptable predictive model in the set of competing adaptable predictive models while the new adaptable predictive model is being generated.
claim 4
5
14,748,216
1
10,001,760
5
6. The system as set forth in claim 1 , wherein the identification of the match in the set of competing adaptable predictive models is performed using a down-selection method that compares the set of competing adaptable predictive models over at least two time intervals of current sensory data, wherein the down-selection method ensures that when selecting a new adaptable predictive model, a selection is made only between those competing adaptable predictive models that are consistent with the history of sensory data.
claim 1
6
14,748,216
1
10,001,760
6
7. The system as set forth in claim 1 , wherein each competing adaptable predictive model is a feed forward neural network.
claim 1
7
14,748,216
1
10,001,760
7
8. The system as set forth in claim 1 , wherein each competing adaptable predictive model is trained or designed for a distinct vehicle driving scenario.
claim 1
8
14,748,216
1
10,001,760
8
9. The system as set forth in claim 6 , wherein a majority of the set of competing adaptable predictive models are compared over a first time interval of current sensory data having a length, and only the matches within that majority are compared over a second time interval of current sensory data within the first time interval having a length that is shorter than the length of the first time interval.
claim 6
9
14,748,216
1
10,001,760
9
10. The system as set forth in claim 9 , wherein the set of competing adaptable predictive models is compared over a cascade of time intervals of current sensory data having progressively shorter time intervals, wherein a number of the set of competing adaptable predictive models compared is progressively reduced.
claim 9
10
14,748,216
1
10,001,760
10
11. A computer-implemented method for controlling a vehicle using a set of competing adaptable predictive models of the dynamics of the vehicle, comprising: an act of causing a data processor to execute instructions stored on a non-transitory memory such that upon execution, one or more processors perform operations of: using each competing adaptable predictive model, generating a set of predicted sensory data based on prior sensory data and prior vehicle control inputs, each competing adaptable predictive model comprising a neural network; comparing each set of predicted sensory data with subsequent sensory data collected from the vehicle; using the comparison of each set of predicted sensory data, automatically identifying one of the competing adaptable predictive models as a match; if a match is identified, modifying the adaptable predictive model that is the match according to a second set of sensory data, and controlling the vehicle using control outputs generated using the match; and if a match is not identified, generating a new adaptable predictive model through automatic model learning using the second set of sensory data, and controlling the vehicle using control outputs generated using the new adaptable predictive model.
null
11
14,748,216
1
10,001,760
11
12. The method as set forth in claim 11 , wherein the one or more processors further perform an operation of storing the matching adaptable predictive model or the new adaptable predictive model in a model database comprising the set of competing adaptable predictive models.
claim 11
12
14,748,216
1
10,001,760
12
13. The method as set forth in claim 12 , wherein the one or more processors further perform an operation of comparing a history of sensory data and previous control outputs for each competing adaptable predictive model in the set of competing adaptable predictive models against current sensory data describing the vehicle's current state in order to identify a match.
claim 12
13
14,748,216
1
10,001,760
13
14. The method as set forth in claim 13 , wherein the one or more processors further perform an operation of extrapolating current sensory data to new data regions.
claim 13
14
14,748,216
1
10,001,760
14
15. The method as set forth in claim 14 , wherein the one or more processors further perform an operation of using the matching adaptable predictive model in the set of competing adaptable predictive models while the new adaptable predictive model is being generated.
claim 14
15
14,748,216
1
10,001,760
15
16. The method as set forth in claim 11 , wherein the identification of the match in the set of competing adaptable predictive models is performed using a down-selection method that compares the set of competing adaptable predictive models over at least two time intervals of current sensory data, wherein the down-selection method ensures that when selecting a new adaptable predictive model, a selection is made only between those competing adaptable predictive models that are consistent with the history of sensory data.
claim 11
16
14,748,216
1
10,001,760
16
17. The method as set forth in claim 16 , wherein a majority of the set of competing adaptable predictive models are compared over a first time interval of current sensory data having a length, and only the matches within that majority are compared over a second time interval of current sensory data within the first time interval having a length that is shorter than the length of the first time interval.
claim 16
17
14,748,216
1
10,001,760
17
18. The method as set forth in claim 17 , wherein the set of competing adaptable predictive models is compared over a cascade of time intervals of current sensory data having progressively shorter time intervals, wherein a number of the set of competing adaptable predictive models compared is progressively reduced.
claim 17
18
14,748,216
1
10,001,760
18
19. A computer program product for controlling a vehicle using a set of competing adaptable predictive models of the dynamics of the vehicle, the computer program product comprising computer-readable instructions stored on a non-transitory computer-readable medium that are executable by a computer having a processor for causing the processor to perform operations of: using each competing adaptable predictive model, generating a set of predicted sensory data based on prior sensory data and prior vehicle control inputs, each competing adaptable predictive model comprising a neural network; comparing each set of predicted sensory data with subsequent sensory data collected from the vehicle; using the comparison of each set of predicted sensory data, automatically identifying one of the competing adaptable predictive models as a match; if a match is identified, modifying the adaptable predictive model that is the match according to a second set of sensory data, and controlling the vehicle using control outputs generated using the match; and if a match is not identified, generating a new adaptable predictive model through automatic model learning using the second set of sensory data, and controlling the vehicle using control outputs generated using the new adaptable predictive model.
null
19
14,748,216
1
10,001,760
19
20. The computer program product as set forth in claim 19 , further comprising instructions for causing the processor to perform an operation of storing the matching adaptable predictive model or the new adaptable predictive model in a model database comprising the set of competing adaptable predictive models.
claim 19
20
14,748,216
1
10,001,760
20
21. The computer program product as set forth in claim 20 , further comprising instructions for causing the processor to perform an operation of comparing a history of sensory data and previous control outputs for each competing adaptable predictive model in the set of competing adaptable predictive models against current sensory data describing the vehicle's current state in order to identify a match.
claim 20
21
14,748,216
1
10,001,760
21
22. The computer program product as set forth in claim 21 , further comprising instructions for causing the processor to perform an operation of extrapolating current sensory data to new data regions.
claim 21
22
14,748,216
1
10,001,760
22
23. The computer program product as set forth in claim 22 , further comprising instructions for causing the processor to perform an operation of using the matching adaptable predictive model in the set of competing adaptable predictive models while the new adaptable predictive model is being generated.
claim 22
23
14,748,216
1
10,001,760
23
24. The computer program product as set forth in claim 19 , wherein the identification of the match in the set of competing adaptable predictive models is performed using a down-selection method that compares the set of competing adaptable predictive models over at least two time intervals of current sensory data, wherein the down-selection method ensures that when selecting a new adaptable predictive model, a selection is made only between those competing adaptable predictive models that are consistent with the history of sensory data.
claim 19
24
14,015,932
1
10,001,904
0
1. A data processing method comprising: receiving a plurality of comments respectively associated with a plurality of video clips from a plurality of videos stored in a video database; receiving comment metadata regarding each comment of the plurality of comments, including a category of a plurality of categories and one or more time values related to a video clip of the plurality of video clips, one or more computers receiving one or more criteria to apply to the comment metadata, wherein the one or more criteria specify at least a particular category of the plurality of categories; the one or more computers selecting two or more video clips by applying the one or more criteria to the comment metadata to identify video clips with comments that are associated with the particular category, the selecting two or more video clips further comprising: identifying two or more comments on different videos of the plurality of comments where the comment metadata specifies the two or more comments as meeting the one or more criteria; and determining, for each comment of the two or more comments, the video clip associated with the comment based on the one or more time values in the comment metadata corresponding to the comment, wherein, for each comment of the two or more comments, a duration of the video clip associated with the comment is determined based on a user-specified duration of time, a default duration of time, or a duration of time stored in the comment metadata; the one or more computers displaying the two or more video clips by merging the two or more video clips into a compilation video.
null
1
14,015,932
1
10,001,904
1
2. The method of claim 1 , wherein each category of the plurality of categories is defined by a taxonomy of a plurality of taxonomies, the one or more criteria specify a particular taxonomy of a plurality of taxonomies, and the comment metadata specifies taxonomy for comments associated with each video of the plurality of videos.
claim 1
2
14,015,932
1
10,001,904
2
3. The method of claim 1 , further comprising adjusting a duration of at least one video clip of the two or more video clips.
claim 1
3
14,015,932
1
10,001,904
3
4. The method of claim 1 , wherein displaying the two or more video clips includes displaying a link for each video clip of the two or more video clips which, when selected, causes a video player window to play the video clip.
claim 1
4
14,015,932
1
10,001,904
4
5. The method of claim 4 , wherein the link is displayed in association with a freeze frame generated based on the video clip.
claim 4
5
14,015,932
1
10,001,904
5
6. The method of claim 5 , wherein the freeze frame is generated based on a screen capture of the video at the one time value related to the video clip.
claim 5
6
14,015,932
1
10,001,904
6
7. The method of claim 4 , wherein the link is displayed with a visual indicator determined by the comment metadata.
claim 4
7
14,015,932
1
10,001,904
7
8. A non-transitory computer-readable storage medium storing one or more instructions which, when executed by one or more processors, cause the one or more processors to perform steps comprising: receiving a plurality of comments respectively associated with a plurality of video clips from a plurality of videos stored in a video database, receiving comment metadata regarding each comment of the plurality of comments, including a category of a plurality of categories and one or more time values related to a video clip of the plurality of video clips, one or more computers receiving one or more criteria to apply to the comment metadata, wherein the one or more criteria specify at least a particular category of the plurality of categories; the one or more computers selecting two or more video clips by applying the one or more criteria to the comment metadata to identify video clips with comments that are associated with the particular category, the selecting two or more video clips further comprising: identifying two or more comments on different videos of the plurality of comments where the comment metadata specifies the two or more comments as meeting the one or more criteria; and determining, for each comment of the two or more comments, the video clip associated with the comment based on the one or more time values in the comment metadata corresponding to the comment, wherein, for each comment of the two or more comments, a duration of the video clip associated with the comment is determined based on a user-specified duration of time, a default duration of time, or a duration of time stored in the comment metadata; the one or more computers displaying the two or more video clips by merging the two or more video clips into a compilation video.
null
8
14,015,932
1
10,001,904
8
9. The non-transitory computer-readable storage medium of claim 8 , wherein each category of the plurality of categories is defined by a taxonomy of a plurality of taxonomies, the one or more criteria specify a particular taxonomy of a plurality of taxonomies, and the comment metadata specifies taxonomy for comments associated with each video of the plurality of videos.
claim 8
9
14,015,932
1
10,001,904
9
10. The non-transitory computer-readable storage medium of claim 8 , wherein the steps further comprise adjusting a duration of at least one video clip of the two or more video clips.
claim 8
10
14,015,932
1
10,001,904
10
11. The non-transitory computer-readable storage medium of claim 8 , wherein displaying the two or more video clips includes displaying a link for each video clip of the two or more video clips which, when selected by a user, causes a video player window to play the video clip.
claim 8
11
14,015,932
1
10,001,904
11
12. The non-transitory computer-readable storage medium of claim 11 , wherein the link is displayed in association with a freeze frame generated based on the video clip.
claim 11
12
14,015,932
1
10,001,904
12
13. The non-transitory computer-readable storage medium of claim 12 , wherein the freeze frame is generated based on a screen capture of the video at the one time value related to the video clip.
claim 12
13
14,015,932
1
10,001,904
13
14. The non-transitory computer-readable storage medium of claim 11 , wherein the link is displayed with a visual indicator determined by the metadata.
claim 11
14
14,015,932
1
10,001,904
14
15. A computer system, comprising: one or more processors; a memory comprising a set of instructions which when executed causes the one or more processors to execute a method, the method comprising: receiving a plurality of comments respectively associated with a plurality of video clips from a plurality of videos stored in a video database; receiving comment metadata regarding each comment of the plurality of comments, including a category of a plurality of categories and one or more time values related to a video clip of the plurality of video clips, one or more computers receiving one or more criteria to apply to the comment metadata, wherein the one or more criteria specify at least a particular category of the plurality of categories; the one or more computers selecting two or more video clips by applying the one or more criteria to the comment metadata to identify video clips with comments that are associated with the particular category, the selecting two or more video clips further comprising: identifying two or more comments on different videos of the plurality of comments where the comment metadata specifies the two or more comments as meeting the one or more criteria; and determining, for each comment of the two or more comments, the video clip associated with the comment based on the one or more time values in the comment metadata corresponding to the comment, wherein, for each comment of the two or more comments, a duration of the video clip associated with the comment is determined based on a user-specified duration of time, a default duration of time, or a duration of time stored in the comment metadata; the one or more computers displaying the two or more video clips by merging the two or more video clips into a compilation video.
null
15
14,015,932
1
10,001,904
15
16. The computer system of claim 15 , wherein each category of the plurality of categories is defined by a taxonomy of a plurality of taxonomies, the one or more criteria specify a particular taxonomy of a plurality of taxonomies, and the comment metadata specifies a taxonomy for comments associated with each video of the plurality of videos.
claim 15
16
14,015,932
1
10,001,904
16
17. The computer system of claim 15 , the method further comprising adjusting a duration of at least one video clip of the two or more video clips.
claim 15
17
14,015,932
1
10,001,904
17
18. The computer system of claim 15 , wherein displaying the two or more video clips includes displaying a link for each video clip of the two or more video clips which, when selected, causes a video player window to play the video clip.
claim 15
18
14,015,932
1
10,001,904
18
19. The computer system of claim 18 , wherein the link is displayed in association with a freeze frame generated based on the video clip.
claim 18
19
14,015,932
1
10,001,904
19
20. The computer system of claim 19 , wherein the freeze frame is generated based on a screen capture of the video at the one time value related to the video clip.
claim 19
20
11,961,742
1
10,001,920
0
1. A method of facilitating data entry at a data entry position in a data set within a data entry environment having a symbolic grammar on a computer having a processor, the method comprising: executing on the processor instructions configured to: if the grammar permits a currently undefined symbol name at the data entry position, select a new symbol name that is not currently assigned to any member of any object associated with the data entry position; and present a symbol list having symbol list options comprising: symbol options permitted by the grammar at the data entry position, and a new symbol name option for the new symbol name; upon receiving from the user a selection of the new symbol name option, assign the new symbol name to a new object; and the new object having an object type selected from an object type set comprising: a variable; a class; a class member; and a function.
null
1
11,961,742
1
10,001,920
1
2. The method of claim 1 , wherein: the symbol list option selection inputs comprising additional user input that are permitted by the grammar at the data entry position following the symbol, and the inserting comprising inserting the additional user input after the selected symbol list option at the data entry position.
claim 1
2
11,961,742
1
10,001,920
2
3. The method of claim 1 , wherein the symbol list further comprises at least one symbol list option selection input associated with a new symbol name option where a new symbol name option is included in the symbol list, and associated with a symbol option where a new symbol name option is not included in the symbol list.
claim 1
3
11,961,742
1
10,001,920
3
4. The method of claim 1 , wherein the symbol list further comprises a symbol list option selection input description that describes at least one symbol list option selection input and the at least one associated symbol list option.
claim 1
4
11,961,742
1
10,001,920
4
5. The method of claim 1 , wherein the symbol list further comprises a suggested symbol list option presented in a different manner than other symbol list options in the symbol list.
claim 1
5
11,961,742
1
10,001,920
5
6. The method of claim 5 , the suggested symbol list option comprising the new symbol name option where the new symbol name option is included in the symbol list.
claim 5
6
11,961,742
1
10,001,920
6
7. The method of claim 5 , comprising: detecting user input representing a suggested symbol list option selection input; and upon detecting the suggested symbol list option selection input, inserting the suggested symbol list option at the data entry position.
claim 5
7
11,961,742
1
10,001,920
7
8. The method of claim 5 , the symbol list including a suggested symbol list option description describing the suggested symbol list option.
claim 5
8
11,961,742
1
10,001,920
8
9. The method of claim 8 , comprising: upon detecting user input representing a focusing on a symbol list option, updating the symbol list to suggest the focused symbol list option, and including a suggested symbol list option description describing the suggested symbol list option.
claim 8
9
End of preview.
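
Before regenerating the dataset, the offending rows can be located by reading the raw data as text and scanning the identifier columns for values that are not purely numeric. A sketch with pandas, assuming a hypothetical claims.csv export with the columns shown above:

    import pandas as pd

    # Hypothetical file name; read every column as text so nothing is cast.
    df = pd.read_csv("claims.csv", dtype=str)

    # Flag identifier values that are not plain digit strings.
    for col in ["appl_id", "doc_id"]:
        bad = df[~df[col].str.fullmatch(r"\d+", na=False)]
        if not bad.empty:
            print(col, bad[col].unique()[:10])  # e.g. ['RE42167', ...]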