url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/langchain-ai/langchain/issues/5339 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5339/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5339/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5339/events | https://github.com/langchain-ai/langchain/pull/5339 | 1,728,672,923 | PR_kwDOIPDwls5Rfvge | 5,339 | pref: reduce DB query error rate | {
"login": "Undertone0809",
"id": 72488598,
"node_id": "MDQ6VXNlcjcyNDg4NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/72488598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Undertone0809",
"html_url": "https://github.com/Undertone0809",
"followers_url": "https://api.github.com/users/Undertone0809/followers",
"following_url": "https://api.github.com/users/Undertone0809/following{/other_user}",
"gists_url": "https://api.github.com/users/Undertone0809/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Undertone0809/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Undertone0809/subscriptions",
"organizations_url": "https://api.github.com/users/Undertone0809/orgs",
"repos_url": "https://api.github.com/users/Undertone0809/repos",
"events_url": "https://api.github.com/users/Undertone0809/events{/privacy}",
"received_events_url": "https://api.github.com/users/Undertone0809/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-05-27T11:49:38 | 2023-06-03T21:55:57 | 2023-06-03T21:55:56 | CONTRIBUTOR | null | # Reduce DB query error rate
If you use the SQL agent of `SQLDatabaseToolkit` to query data, it is prone to errors in the queried fields and often uses fields that do not exist in the database tables. However, the existing prompt does not effectively make the agent aware that the fields it queried are wrong. We therefore need to improve the prompt so that the agent realizes it has queried non-existent fields and is directed to use `schema_sql_db`, that is, `ListSQLDatabaseTool`, to first look up the corresponding fields of the table in the database, and then use `QuerySQLDatabaseTool` for the query.
Here is a demo from my project that shows the problem.
**Original Agent**
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase


def create_mysql_kit():
    db = SQLDatabase.from_uri("mysql+pymysql://xxxxxxx")
    llm = OpenAI(temperature=0)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent_executor = create_sql_agent(
        llm=OpenAI(temperature=0),
        toolkit=toolkit,
        verbose=True
    )
    agent_executor.run("Who are the users of sysuser in this system? Tell me the username of all users")


if __name__ == '__main__':
    create_mysql_kit()
```
**original output**
```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: app_sysrole_menus, app_bimfacemodel, app_project_users, app_measuringpointdata, auth_user, auth_user_groups, django_apscheduler_djangojobexecution, app_project, app_elementpoint, django_apscheduler_djangojob, django_content_type, app_sysrole, django_admin_log, app_bimfaceaccount, app_measuringpoint_warning_thresholds, app_measuringpoint, app_company, auth_group_permissions, app_sysuser, app_sysuser_companies, app_sysmenu, app_datawarningthreshold, auth_group, auth_permission, app_datawarningrecord, auth_user_user_permissions, app_bimfaceaccount_bimface_models, django_migrations, app_measuringitem, django_session
Thought: I should query the app_sysuser table to get the users in the system.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation:
SELECT username FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT name FROM app_sysuser LIMIT 10;
Observation:
SELECT name FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT name FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'name' in 'field list'")
[SQL: SELECT name FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT id, username FROM app_sysuser LIMIT 10;
Observation:
SELECT id, username FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT id, username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'id' in 'field list'")
[SQL: SELECT id, username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT user_id, username FROM app_sysuser LIMIT 10;
Observation:
SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT user_id, user_name FROM app_sysuser LIMIT 10;
Observation:
SELECT user_id, user_name FROM app_sysuser WHERE user_id NOT IN (NULL) AND user_name BETWEEN 'A' AND 'Z' UNION ALL SELECT user_id, user_name FROM app_sysuser WHERE user_id IS NOT NULL AND user_name LIKE '%a%' LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT user_id, user_name FROM app_sysuser WHERE user_id NOT IN (NULL) AND user_name BETWEEN 'A' AND 'Z' UNION ALL SELECT user_id, user_name FROM app_sysuser WHERE user_id IS NOT NULL AND user_name LIKE '%a%' LIMIT 10;
Observation: []
Thought: I now know the final answer.
Final Answer: There are no users in the sysuser table.
> Finished chain.
Process finished with exit code 0
```
As you can see, the agent keeps using incorrect SQL to query the data. In fact, the username field of my `app_sysuser` table is `user_name`, not the `username` the agent believes it to be.
**Optimized agent output**
```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: app_project_users, django_content_type, app_measuringpoint_warning_thresholds, app_bimfaceaccount_bimface_models, django_session, app_sysuser, django_migrations, app_bimfacemodel, app_elementpoint, app_measuringpoint, django_apscheduler_djangojobexecution, auth_permission, app_sysuser_companies, app_measuringpointdata, app_measuringitem, app_sysrole_menus, django_admin_log, auth_group, django_apscheduler_djangojob, app_sysmenu, app_project, app_bimfaceaccount, app_datawarningthreshold, app_datawarningrecord, auth_user_groups, auth_user_user_permissions, auth_group_permissions, auth_user, app_company, app_sysrole
Thought: I should query the app_sysuser table to get the usernames of all the users.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should use the schema_sql_db tool to get the correct table fields.
Action: schema_sql_db
Action Input: app_sysuser
Observation:
CREATE TABLE app_sysuser (
user_id INTEGER NOT NULL AUTO_INCREMENT,
user_account VARCHAR(50) COLLATE utf8mb4_general_ci NOT NULL,
user_password VARCHAR(255) COLLATE utf8mb4_general_ci NOT NULL,
user_name VARCHAR(50) COLLATE utf8mb4_general_ci NOT NULL,
user_avatar VARCHAR(255) COLLATE utf8mb4_general_ci,
user_email VARCHAR(254) COLLATE utf8mb4_general_ci,
user_phone VARCHAR(20) COLLATE utf8mb4_general_ci,
user_create_time DATETIME(6) NOT NULL,
user_update_time DATETIME(6) NOT NULL,
user_status SMALLINT,
role_id_id INTEGER,
PRIMARY KEY (user_id),
CONSTRAINT app_sysuser_role_id_id_51b50f39_fk_app_sysrole_role_id FOREIGN KEY(role_id_id) REFERENCES app_sysrole (role_id)
)COLLATE utf8mb4_general_ci DEFAULT CHARSET=utf8mb4 ENGINE=InnoDB
/*
3 rows from app_sysuser table:
user_id user_account user_password user_name user_avatar user_email user_phone user_create_time user_update_time user_status role_id_id
xxxxxxxxxxxxxx
*/
Thought: I should query the app_sysuser table to get the usernames of all the users.
Action: query_sql_db
Action Input: SELECT user_account FROM app_sysuser LIMIT 10
Observation: [('baiyun',), ('eatrice',), ('lisi',), ('pingxiang',), ('wangwu',), ('zeeland',), ('zsj',), ('zzw',)]
Thought: I now know the final answer
Final Answer: The usernames of the users in the sysuser table are baiyun, eatrice, lisi, pingxiang, wangwu, zeeland, zsj, and zzw.
> Finished chain.
Process finished with exit code 0
```
I have tested about 10 related prompts and they all work properly, with a much lower error rate than before.
## Who can review?
@vowelparrot
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5339/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5339",
"html_url": "https://github.com/langchain-ai/langchain/pull/5339",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5339.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5339.patch",
"merged_at": "2023-06-03T21:55:56"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5338 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5338/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5338/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5338/events | https://github.com/langchain-ai/langchain/pull/5338 | 1,728,670,022 | PR_kwDOIPDwls5Rfu44 | 5,338 | adding MongoDBAtlasVectorSearch | {
"login": "P-E-B",
"id": 38215315,
"node_id": "MDQ6VXNlcjM4MjE1MzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/38215315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/P-E-B",
"html_url": "https://github.com/P-E-B",
"followers_url": "https://api.github.com/users/P-E-B/followers",
"following_url": "https://api.github.com/users/P-E-B/following{/other_user}",
"gists_url": "https://api.github.com/users/P-E-B/gists{/gist_id}",
"starred_url": "https://api.github.com/users/P-E-B/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/P-E-B/subscriptions",
"organizations_url": "https://api.github.com/users/P-E-B/orgs",
"repos_url": "https://api.github.com/users/P-E-B/repos",
"events_url": "https://api.github.com/users/P-E-B/events{/privacy}",
"received_events_url": "https://api.github.com/users/P-E-B/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4678528817,
"node_id": "LA_kwDOIPDwls8AAAABFtyvMQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
}
] | closed | false | null | [] | null | 22 | 2023-05-27T11:42:59 | 2023-06-06T14:18:01 | 2023-05-30T14:59:02 | CONTRIBUTOR | null | # Add MongoDBAtlasVectorSearch for the python library
Fixes #5337
## Who can review?
@dev2049 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5338/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5338",
"html_url": "https://github.com/langchain-ai/langchain/pull/5338",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5338.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5338.patch",
"merged_at": "2023-05-30T14:59:02"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5337 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5337/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5337/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5337/events | https://github.com/langchain-ai/langchain/issues/5337 | 1,728,669,494 | I_kwDOIPDwls5nCWc2 | 5,337 | Add MongoDBAtlasVectorSearch vectorstore | {
"login": "P-E-B",
"id": 38215315,
"node_id": "MDQ6VXNlcjM4MjE1MzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/38215315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/P-E-B",
"html_url": "https://github.com/P-E-B",
"followers_url": "https://api.github.com/users/P-E-B/followers",
"following_url": "https://api.github.com/users/P-E-B/following{/other_user}",
"gists_url": "https://api.github.com/users/P-E-B/gists{/gist_id}",
"starred_url": "https://api.github.com/users/P-E-B/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/P-E-B/subscriptions",
"organizations_url": "https://api.github.com/users/P-E-B/orgs",
"repos_url": "https://api.github.com/users/P-E-B/repos",
"events_url": "https://api.github.com/users/P-E-B/events{/privacy}",
"received_events_url": "https://api.github.com/users/P-E-B/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-27T11:41:39 | 2023-05-30T14:59:03 | 2023-05-30T14:59:03 | CONTRIBUTOR | null | ### Feature request
MongoDB Atlas is a fully managed DBaaS, powered by the MongoDB database. It also enables Lucene (collocated with the mongod process) for full-text search - this is known as Atlas Search. The PR should allow LangChain users to use the functionality related to the MongoDB Atlas Vector Search feature, where you can store your embeddings in MongoDB documents and create a Lucene vector index to perform a KNN search.
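Below is a rough sketch of what the intended usage could look like; the class name, constructor arguments, and index name here are assumptions drawn from this feature request (the integration itself is added in PR #5338), not a confirmed API.
```python
# Illustrative sketch only; names, signatures, and the index name below are
# assumptions based on this feature request, not a confirmed API.
from pymongo import MongoClient

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch

# Placeholder connection string and namespace.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/?retryWrites=true")
collection = client["my_db"]["my_collection"]

# Store texts plus their embeddings as MongoDB documents, then run a KNN
# search against the Lucene vector index configured in Atlas Search.
vectorstore = MongoDBAtlasVectorSearch.from_texts(
    texts=["MongoDB Atlas Search supports KNN search over embeddings."],
    embedding=OpenAIEmbeddings(),
    collection=collection,
    index_name="default",
)
docs = vectorstore.similarity_search("How do I run a KNN search?", k=1)
```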
### Motivation
There is currently no way in Langchain to connect to MongoDB Atlas and perform a KNN search.
### Your contribution
I am submitting a PR for this issue soon. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5337/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5336 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5336/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5336/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5336/events | https://github.com/langchain-ai/langchain/issues/5336 | 1,728,661,570 | I_kwDOIPDwls5nCUhC | 5,336 | FAISS.add_embeddings is typed to take iterables but does not. | {
"login": "startakovsky",
"id": 689723,
"node_id": "MDQ6VXNlcjY4OTcyMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/689723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/startakovsky",
"html_url": "https://github.com/startakovsky",
"followers_url": "https://api.github.com/users/startakovsky/followers",
"following_url": "https://api.github.com/users/startakovsky/following{/other_user}",
"gists_url": "https://api.github.com/users/startakovsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/startakovsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/startakovsky/subscriptions",
"organizations_url": "https://api.github.com/users/startakovsky/orgs",
"repos_url": "https://api.github.com/users/startakovsky/repos",
"events_url": "https://api.github.com/users/startakovsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/startakovsky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-27T11:29:03 | 2023-12-07T16:08:35 | 2023-12-07T16:08:34 | CONTRIBUTOR | null | ### System Info
MacOS
Langchain Version 0.0.181
Python Version 3.11.3
### Who can help?
@eyurtsev I wasn't sure who to reach out to. The following is the signature for adding embeddings to FAISS:
```python
FAISS.add_embeddings(
self,
text_embeddings: 'Iterable[Tuple[str, List[float]]]',
metadatas: 'Optional[List[dict]]' = None,
**kwargs: 'Any',
) -> 'List[str]'
```
Notice that `text_embeddings` is typed to take an iterable. However, when I pass an iterator I get a failure, but when it is wrapped in a `list` call it succeeds.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
vs = FAISS.from_texts(['a'], embedding=OpenAIEmbeddings())
vector = OpenAIEmbeddings().embed_query('b')
# error happens with this next line, see "Expected behavior" below.
vs.add_embeddings(iter([('b', vector)]))
# no error happens when wrapped in a list
vs.add_embeddings(list(iter([('b', vector)])))
```
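One plausible explanation, offered here only as an assumption rather than a confirmed reading of the FAISS wrapper, is that the input is iterated more than once internally, so a single-pass iterator is exhausted after the first pass:
```python
# Minimal sketch of the single-pass iterator pitfall (illustrative only).
pairs = iter([('b', [0.1, 0.2, 0.3])])

texts = [t for t, _ in pairs]        # first pass consumes the iterator
embeddings = [e for _, e in pairs]   # second pass sees nothing

print(texts)       # ['b']
print(embeddings)  # [] -> a later numpy conversion loses its second dimension
```
Wrapping the iterator in `list(...)` avoids this, which matches the observed behavior.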
### Expected behavior
```bash
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
...
File ~/.pyenv/versions/3.11.3/envs/myenv/lib/python3.11/site-packages/faiss/class_wrappers.py:227, in handle_Index.<locals>.replacement_add(self, x)
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 The vectors are implicitly numbered in sequence. When `n` vectors are
(...)
224 `dtype` must be float32.
225 """
--> 227 n, d = x.shape
228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
ValueError: not enough values to unpack (expected 2, got 1)
``` | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5336/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5335 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5335/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5335/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5335/events | https://github.com/langchain-ai/langchain/issues/5335 | 1,728,609,263 | I_kwDOIPDwls5nCHvv | 5,335 | `Agents with Chat Models` Example Code Abnormal When Using `google-serper` Tool | {
"login": "dsdashun",
"id": 1663307,
"node_id": "MDQ6VXNlcjE2NjMzMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1663307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsdashun",
"html_url": "https://github.com/dsdashun",
"followers_url": "https://api.github.com/users/dsdashun/followers",
"following_url": "https://api.github.com/users/dsdashun/following{/other_user}",
"gists_url": "https://api.github.com/users/dsdashun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsdashun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsdashun/subscriptions",
"organizations_url": "https://api.github.com/users/dsdashun/orgs",
"repos_url": "https://api.github.com/users/dsdashun/repos",
"events_url": "https://api.github.com/users/dsdashun/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsdashun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-27T09:47:29 | 2023-09-15T16:10:24 | 2023-09-15T16:10:23 | NONE | null | ### System Info
* Langchain: 0.0.181
* OS: Ubuntu Linux 20.04
* Kernel: `Linux iZt4n78zs78m7gw0tztt8lZ 5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux`
* Ubuntu version:
```plain
LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal
```
* Python: Python 3.8.2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use the example code provided in [Quick Start: Agents with Chat Models](https://python.langchain.com/en/latest/getting_started/getting_started.html#agents-with-chat-models), but replace the 'serpapi' tool with the 'google-serper' tool.
Here's the modified code:
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
chat = ChatOpenAI(temperature=0.3)
llm = OpenAI(temperature=0)
tools = load_tools(["google-serper", "llm-math"], llm=llm)
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
result = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
print(result)
```
When I execute the code above, an error occurs. Here's the error text:
~~~plain
(openai-test) dsdashun@iZt4n78zs78m7gw0tztt8lZ:~/workspaces/openai-test/langchain$ python3 get_started_chat_agent.py
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/chat/output_parser.py", line 22, in parse
response = json.loads(action.strip())
File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.8/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 4 column 2 (char 75)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "get_started_chat_agent.py", line 14, in <module>
result = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 792, in _call
next_step_output = self._take_next_step(
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 672, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/agent.py", line 385, in plan
return self.output_parser.parse(full_output)
File "/home/dsdashun/.local/share/virtualenvs/openai-test-pldc8tvg/lib/python3.8/site-packages/langchain/agents/chat/output_parser.py", line 26, in parse
raise OutputParserException(f"Could not parse LLM output: {text}")
langchain.schema.OutputParserException: Could not parse LLM output: Question: Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?
Thought: I should use Serper Search to find out who Olivia Wilde's boyfriend is and then use Calculator to calculate his age raised to the 0.23 power.
Action:
```
{
"action": "Serper Search",
"action_input": "Olivia Wilde boyfriend"
},
{
"action": "Calculator",
"action_input": "Age of Olivia Wilde's boyfriend raised to the 0.23 power"
}
```
~~~
However, if I use the `pdb` debugger to debug the program step by step, and pause a little bit after running `initialize_agent`, everything is fine.
I didn't use the 'serpapi' tool because I don't have an API key for it, so I cannot verify whether the original example code runs successfully on my machine with the 'serpapi' tool.
### Expected behavior
I expect the code to run successfully without any problems, even if I replace the search tool with a similar one.
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5335/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5334 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5334/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5334/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5334/events | https://github.com/langchain-ai/langchain/pull/5334 | 1,728,559,884 | PR_kwDOIPDwls5RfWSS | 5,334 | Add removing any text before json to parse_json_markdown | {
"login": "mohsend",
"id": 4129635,
"node_id": "MDQ6VXNlcjQxMjk2MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4129635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohsend",
"html_url": "https://github.com/mohsend",
"followers_url": "https://api.github.com/users/mohsend/followers",
"following_url": "https://api.github.com/users/mohsend/following{/other_user}",
"gists_url": "https://api.github.com/users/mohsend/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mohsend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohsend/subscriptions",
"organizations_url": "https://api.github.com/users/mohsend/orgs",
"repos_url": "https://api.github.com/users/mohsend/repos",
"events_url": "https://api.github.com/users/mohsend/events{/privacy}",
"received_events_url": "https://api.github.com/users/mohsend/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-27T09:00:13 | 2023-05-29T13:46:03 | 2023-05-29T13:46:03 | NONE | null | # Add removing any text before the json string to parse_json_markdown (Issue #1358)
Fixes #1358 (ValueError: Could not parse LLM output: )
Sometimes the agent adds a short sentence before the thought JSON it's supposed to return, which causes an error. This little function removes that part before trying to parse the main JSON response. Here is an example error I got before this fix:
`````
Traceback (most recent call last):
File ".../langchain/agents/conversational_chat/output_parser.py", line 17, in parse
response = parse_json_markdown(text)
^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../langchain/output_parsers/json.py", line 17, in parse_json_markdown
parsed = json.loads(json_string)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
...
File ".../langchain/chains/base.py", line 239, in run
return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../langchain/chains/base.py", line 140, in __call__
raise e
File ".../langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File ".../langchain/agents/agent.py", line 951, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File ".../langchain/agents/agent.py", line 773, in _take_next_step
raise e
File ".../langchain/agents/agent.py", line 762, in _take_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File ".../langchain/agents/agent.py", line 444, in plan
return self.output_parser.parse(full_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../langchain/agents/conversational_chat/output_parser.py", line 24, in parse
raise OutputParserException(f"Could not parse LLM output: {text}") from e
langchain.schema.OutputParserException: Could not parse LLM output: Sure, here's a sentence-long description of the first tool in the list:
```json
{
"action": "Final Answer",
"action_input": "The 'Search the internet' tool is useful for finding information about current events or the current state of the world. You can input a single search term to get started."
}
```
`````
In this PR, for the example above, `parse_json_markdown` will remove "Sure, here's a sentence-long description of the first tool in the list:" before trying to parse the string as JSON.
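A minimal sketch of this kind of sanitization is shown below; the helper name and exact regex are illustrative assumptions, not the actual diff in this PR.
````python
import json
import re


def parse_json_markdown_lenient(text: str) -> dict:
    """Drop any leading chatter before the JSON payload, then parse it.

    Illustrative helper only; LangChain's real parse_json_markdown may differ.
    """
    # Prefer a fenced ```json ... ``` block if one is present.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if fenced is not None:
        return json.loads(fenced.group(1))
    # Otherwise start at the first opening brace and ignore trailing text.
    start = text.find("{")
    if start == -1:
        raise ValueError("No JSON object found in the LLM output")
    obj, _ = json.JSONDecoder().raw_decode(text[start:])
    return obj
````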
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@vowelparrot
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5334/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5334",
"html_url": "https://github.com/langchain-ai/langchain/pull/5334",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5334.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5334.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5333 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5333/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5333/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5333/events | https://github.com/langchain-ai/langchain/pull/5333 | 1,728,554,626 | PR_kwDOIPDwls5RfVIC | 5,333 | Improved json parse sanitization | {
"login": "svdeepak99",
"id": 42609308,
"node_id": "MDQ6VXNlcjQyNjA5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/42609308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svdeepak99",
"html_url": "https://github.com/svdeepak99",
"followers_url": "https://api.github.com/users/svdeepak99/followers",
"following_url": "https://api.github.com/users/svdeepak99/following{/other_user}",
"gists_url": "https://api.github.com/users/svdeepak99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svdeepak99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svdeepak99/subscriptions",
"organizations_url": "https://api.github.com/users/svdeepak99/orgs",
"repos_url": "https://api.github.com/users/svdeepak99/repos",
"events_url": "https://api.github.com/users/svdeepak99/events{/privacy}",
"received_events_url": "https://api.github.com/users/svdeepak99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-27T08:53:29 | 2023-05-29T13:45:45 | 2023-05-29T13:45:45 | CONTRIBUTOR | null | @vowelparrot Added regex expressions (a superset of the previously present non-regex expressions) to also handle the case where unnecessary characters are present after the opening triple backticks but before the {, and after the } but before the closing triple backticks.
Before fix (additional "AI: " is present after Thought:):
```
Observation: There are 38175 accounts available in the dataframe.
Thought:AI: {
"action": "Final Answer",
"action_input": "There are 38175 accounts available in the dataframe."
}
Observation: Invalid or incomplete response
```
After fix:
```
Observation: There are 38175 accounts available in the dataframe.
Thought:AI: {
"action": "Final Answer",
"action_input": "There are 38175 accounts available in the dataframe."
}
> Finished chain.
[AI Message]: There are 38175 accounts available in the dataframe.
```
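A hedged sketch of the kind of pattern this describes follows; the exact regex added in this PR may differ.
````python
import re

# Illustrative pattern (an assumption, not the PR's exact regex): tolerate stray
# characters between the opening ``` and the "{", and between the "}" and the
# closing ```.
ACTION_RE = re.compile(r"```[^{]*(\{.*\})[^`]*```", re.DOTALL)


def extract_action_json(text: str) -> str:
    match = ACTION_RE.search(text)
    if match is None:
        raise ValueError("No JSON action block found")
    return match.group(1)


print(extract_action_json('AI: ```json\n{"action": "Final Answer"}\n``` done'))
````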
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5333/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5333",
"html_url": "https://github.com/langchain-ai/langchain/pull/5333",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5333.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5333.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5332 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5332/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5332/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5332/events | https://github.com/langchain-ai/langchain/pull/5332 | 1,728,523,406 | PR_kwDOIPDwls5RfOgS | 5,332 | refactor: BaseStringMessagePromptTemplate from_template method | {
"login": "Sukitly",
"id": 54905519,
"node_id": "MDQ6VXNlcjU0OTA1NTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/54905519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sukitly",
"html_url": "https://github.com/Sukitly",
"followers_url": "https://api.github.com/users/Sukitly/followers",
"following_url": "https://api.github.com/users/Sukitly/following{/other_user}",
"gists_url": "https://api.github.com/users/Sukitly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sukitly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sukitly/subscriptions",
"organizations_url": "https://api.github.com/users/Sukitly/orgs",
"repos_url": "https://api.github.com/users/Sukitly/repos",
"events_url": "https://api.github.com/users/Sukitly/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sukitly/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 5 | 2023-05-27T07:48:30 | 2023-06-03T23:55:59 | 2023-06-03T23:55:58 | CONTRIBUTOR | null |
# refactor BaseStringMessagePromptTemplate from_template method
Refactor the `from_template` method of the `BaseStringMessagePromptTemplate` class to allow passing keyword arguments to the `from_template` method of `PromptTemplate`.
This enables the use of arguments such as `template_format`.
In my scenario, I intend to utilize Jinja2 for formatting the human message prompt in the chat template.
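For example, the intended usage would look roughly like the sketch below (assuming the keyword arguments are simply forwarded to `PromptTemplate.from_template`):
```python
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

# Forward template_format so Jinja2 syntax is accepted in the human message.
human_prompt = HumanMessagePromptTemplate.from_template(
    "Tell me a {{ adjective }} joke about {{ topic }}.",
    template_format="jinja2",
)
chat_prompt = ChatPromptTemplate.from_messages([human_prompt])
print(chat_prompt.format(adjective="funny", topic="chickens"))
```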
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5332/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5332",
"html_url": "https://github.com/langchain-ai/langchain/pull/5332",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5332.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5332.patch",
"merged_at": "2023-06-03T23:55:58"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5331 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5331/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5331/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5331/events | https://github.com/langchain-ai/langchain/pull/5331 | 1,728,469,328 | PR_kwDOIPDwls5RfDPR | 5,331 | Fixing blank thoughts in verbose for "_Exception" Action | {
"login": "svdeepak99",
"id": 42609308,
"node_id": "MDQ6VXNlcjQyNjA5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/42609308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svdeepak99",
"html_url": "https://github.com/svdeepak99",
"followers_url": "https://api.github.com/users/svdeepak99/followers",
"following_url": "https://api.github.com/users/svdeepak99/following{/other_user}",
"gists_url": "https://api.github.com/users/svdeepak99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svdeepak99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svdeepak99/subscriptions",
"organizations_url": "https://api.github.com/users/svdeepak99/orgs",
"repos_url": "https://api.github.com/users/svdeepak99/repos",
"events_url": "https://api.github.com/users/svdeepak99/events{/privacy}",
"received_events_url": "https://api.github.com/users/svdeepak99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-27T05:28:30 | 2023-05-28T04:14:17 | 2023-05-28T04:14:16 | CONTRIBUTOR | null | Fixed the issue of blank Thoughts being printed in verbose when `handle_parsing_errors=True`, as below:
Before Fix:
```
Observation: There are 38175 accounts available in the dataframe.
Thought:
Observation: Invalid or incomplete response
Thought:
Observation: Invalid or incomplete response
Thought:
```
After Fix:
```
Observation: There are 38175 accounts available in the dataframe.
Thought:AI: {
"action": "Final Answer",
"action_input": "There are 38175 accounts available in the dataframe."
}
Observation: Invalid Action or Action Input format
Thought:AI: {
"action": "Final Answer",
"action_input": "The number of available accounts is 38175."
}
Observation: Invalid Action or Action Input format
```
@vowelparrot currently I have set the colour of thought to green (same as the colour when `handle_parsing_errors=False`). If you want to change the colour of this "_Exception" case to red or something else (when `handle_parsing_errors=True`), feel free to change it in line 789.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5331/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5331",
"html_url": "https://github.com/langchain-ai/langchain/pull/5331",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5331.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5331.patch",
"merged_at": "2023-05-28T04:14:16"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5330 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5330/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5330/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5330/events | https://github.com/langchain-ai/langchain/pull/5330 | 1,728,464,301 | PR_kwDOIPDwls5RfCMW | 5,330 | Reformat openai proxy setting as code | {
"login": "sevendark",
"id": 13547104,
"node_id": "MDQ6VXNlcjEzNTQ3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/13547104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sevendark",
"html_url": "https://github.com/sevendark",
"followers_url": "https://api.github.com/users/sevendark/followers",
"following_url": "https://api.github.com/users/sevendark/following{/other_user}",
"gists_url": "https://api.github.com/users/sevendark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sevendark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sevendark/subscriptions",
"organizations_url": "https://api.github.com/users/sevendark/orgs",
"repos_url": "https://api.github.com/users/sevendark/repos",
"events_url": "https://api.github.com/users/sevendark/events{/privacy}",
"received_events_url": "https://api.github.com/users/sevendark/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-27T05:12:55 | 2023-05-29T14:02:47 | 2023-05-29T14:02:47 | CONTRIBUTOR | null | # Reformat the openai proxy setting as code
Only affect the doc for openai Model
- @hwchase17
- @agola11
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5330/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5330",
"html_url": "https://github.com/langchain-ai/langchain/pull/5330",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5330.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5330.patch",
"merged_at": "2023-05-29T14:02:47"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5329 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5329/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5329/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5329/events | https://github.com/langchain-ai/langchain/issues/5329 | 1,728,448,457 | I_kwDOIPDwls5nBgfJ | 5,329 | Serp APi and google search API won't work with LLama models like vicuna | {
"login": "mikeyang01",
"id": 2081093,
"node_id": "MDQ6VXNlcjIwODEwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2081093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikeyang01",
"html_url": "https://github.com/mikeyang01",
"followers_url": "https://api.github.com/users/mikeyang01/followers",
"following_url": "https://api.github.com/users/mikeyang01/following{/other_user}",
"gists_url": "https://api.github.com/users/mikeyang01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikeyang01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikeyang01/subscriptions",
"organizations_url": "https://api.github.com/users/mikeyang01/orgs",
"repos_url": "https://api.github.com/users/mikeyang01/repos",
"events_url": "https://api.github.com/users/mikeyang01/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikeyang01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-27T04:23:16 | 2023-06-30T07:51:03 | 2023-06-06T13:11:32 | NONE | null | ### System Info
Langchain Version: 0.0.176
Ubuntu x86 23.04
Memory 24gb
AMD EPYC
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp

# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
# Make sure the model path is correct for your system!
llm_cpp = LlamaCpp(model_path="/vicuna/ggml-vic7b-q4_0.bin", callback_manager=callback_manager)
llm = llm_cpp
tools = load_tools(["serpapi"], llm=llm_cpp)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is football?")
```
Result
```
Action Input: "what is football?" I should probably start by defining what football actually is.
Action: Let's [Search] "what is football?"
Action Input: "what is football?"
Observation: Let's [Search] "what is football?" is not a valid tool, try another one.
Thought:
```
### Expected behavior
Expected behavior: search Google and return correct results.
If I change the model from Vicuna to the OpenAI API, it works fine.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5329/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5328 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5328/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5328/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5328/events | https://github.com/langchain-ai/langchain/issues/5328 | 1,728,430,768 | I_kwDOIPDwls5nBcKw | 5,328 | Difference among various ways to query database and return source information? (Question Answering with Sources, Retrieval Question Answering with Sources, index.query_with_sources, etc.) | {
"login": "cyberjj999",
"id": 51283594,
"node_id": "MDQ6VXNlcjUxMjgzNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/51283594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyberjj999",
"html_url": "https://github.com/cyberjj999",
"followers_url": "https://api.github.com/users/cyberjj999/followers",
"following_url": "https://api.github.com/users/cyberjj999/following{/other_user}",
"gists_url": "https://api.github.com/users/cyberjj999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyberjj999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyberjj999/subscriptions",
"organizations_url": "https://api.github.com/users/cyberjj999/orgs",
"repos_url": "https://api.github.com/users/cyberjj999/repos",
"events_url": "https://api.github.com/users/cyberjj999/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyberjj999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-05-27T03:28:23 | 2023-09-18T16:10:35 | 2023-09-18T16:10:34 | NONE | null | ### Issue you'd like to raise.
What are the key differences between these ways of querying a database and returning the answer along with its relevant sources?
My main objective is to have a chatbot that has knowledge from a knowledge base and can still maintain conversation history. Its answers must also return the source documents. Which option is the best among so many choices?
There are
1. [Question Answering with Sources](https://python.langchain.com/en/latest/modules/chains/index_examples/qa_with_sources.html),
```
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```
2. [Retrieval Question Answering with Sources](https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa_with_sources.html)
```
from langchain import OpenAI
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True)
```
3. [Question Answering over Docs](https://python.langchain.com/en/latest/use_cases/question_answering.html)
```
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
```
...plus probably quite a few more examples I could find if I dug through the documentation.
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5328/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5327 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5327/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5327/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5327/events | https://github.com/langchain-ai/langchain/pull/5327 | 1,728,415,189 | PR_kwDOIPDwls5Re4Fe | 5,327 | Add path validation to DirectoryLoader | {
"login": "os1ma",
"id": 39944763,
"node_id": "MDQ6VXNlcjM5OTQ0NzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/39944763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/os1ma",
"html_url": "https://github.com/os1ma",
"followers_url": "https://api.github.com/users/os1ma/followers",
"following_url": "https://api.github.com/users/os1ma/following{/other_user}",
"gists_url": "https://api.github.com/users/os1ma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/os1ma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/os1ma/subscriptions",
"organizations_url": "https://api.github.com/users/os1ma/orgs",
"repos_url": "https://api.github.com/users/os1ma/repos",
"events_url": "https://api.github.com/users/os1ma/events{/privacy}",
"received_events_url": "https://api.github.com/users/os1ma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-27T02:38:00 | 2023-05-28T19:31:23 | 2023-05-28T19:31:23 | CONTRIBUTOR | null | # Add path validation to DirectoryLoader
This PR introduces a minor adjustment to the DirectoryLoader by adding validation for the path argument. Previously, if the provided path didn't exist or wasn't a directory, DirectoryLoader would return an empty document list due to the behavior of the `glob` method. This could potentially cause confusion for users, as they might expect a file-loading error instead.
So, I've added two validations to the load method of the DirectoryLoader:
- Raise a FileNotFoundError if the provided path does not exist
- Raise a ValueError if the provided path is not a directory
Due to the relatively small scope of these changes, a new issue was not created.
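A minimal sketch of the described checks, written as a standalone helper rather than the actual diff:
```python
from pathlib import Path


def validate_directory(path: str) -> Path:
    """Illustrative helper mirroring the validation described above."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(f"Directory not found: '{path}'")
    if not p.is_dir():
        raise ValueError(f"Expected directory, got file: '{path}'")
    return p
```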
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@eyurtsev
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5327/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5327",
"html_url": "https://github.com/langchain-ai/langchain/pull/5327",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5327.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5327.patch",
"merged_at": "2023-05-28T19:31:23"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5326 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5326/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5326/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5326/events | https://github.com/langchain-ai/langchain/pull/5326 | 1,728,409,984 | PR_kwDOIPDwls5Re3C1 | 5,326 | Tracing Group | {
"login": "agola11",
"id": 9536492,
"node_id": "MDQ6VXNlcjk1MzY0OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9536492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agola11",
"html_url": "https://github.com/agola11",
"followers_url": "https://api.github.com/users/agola11/followers",
"following_url": "https://api.github.com/users/agola11/following{/other_user}",
"gists_url": "https://api.github.com/users/agola11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agola11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agola11/subscriptions",
"organizations_url": "https://api.github.com/users/agola11/orgs",
"repos_url": "https://api.github.com/users/agola11/repos",
"events_url": "https://api.github.com/users/agola11/events{/privacy}",
"received_events_url": "https://api.github.com/users/agola11/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-27T02:18:53 | 2023-06-06T02:18:44 | 2023-06-06T02:18:43 | COLLABORATOR | null | Add context manager to group all runs under a virtual parent | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5326/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5326",
"html_url": "https://github.com/langchain-ai/langchain/pull/5326",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5326.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5326.patch",
"merged_at": "2023-06-06T02:18:43"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5325 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5325/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5325/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5325/events | https://github.com/langchain-ai/langchain/pull/5325 | 1,728,399,484 | PR_kwDOIPDwls5Re0zl | 5,325 | Add pagination for Vertex AI embeddings | {
"login": "Jflick58",
"id": 22459070,
"node_id": "MDQ6VXNlcjIyNDU5MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/22459070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jflick58",
"html_url": "https://github.com/Jflick58",
"followers_url": "https://api.github.com/users/Jflick58/followers",
"following_url": "https://api.github.com/users/Jflick58/following{/other_user}",
"gists_url": "https://api.github.com/users/Jflick58/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jflick58/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jflick58/subscriptions",
"organizations_url": "https://api.github.com/users/Jflick58/orgs",
"repos_url": "https://api.github.com/users/Jflick58/repos",
"events_url": "https://api.github.com/users/Jflick58/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jflick58/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-27T01:49:22 | 2023-05-29T13:57:41 | 2023-05-29T13:57:41 | CONTRIBUTOR | null | Fixes #5316
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5325/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5325",
"html_url": "https://github.com/langchain-ai/langchain/pull/5325",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5325.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5325.patch",
"merged_at": "2023-05-29T13:57:41"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5324 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5324/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5324/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5324/events | https://github.com/langchain-ai/langchain/issues/5324 | 1,728,392,677 | I_kwDOIPDwls5nBS3l | 5,324 | Issue: Fix or automate sync of releases to Discord Announcements channel, Twitter, etc. | {
"login": "uogbuji",
"id": 279982,
"node_id": "MDQ6VXNlcjI3OTk4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/279982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uogbuji",
"html_url": "https://github.com/uogbuji",
"followers_url": "https://api.github.com/users/uogbuji/followers",
"following_url": "https://api.github.com/users/uogbuji/following{/other_user}",
"gists_url": "https://api.github.com/users/uogbuji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uogbuji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uogbuji/subscriptions",
"organizations_url": "https://api.github.com/users/uogbuji/orgs",
"repos_url": "https://api.github.com/users/uogbuji/repos",
"events_url": "https://api.github.com/users/uogbuji/events{/privacy}",
"received_events_url": "https://api.github.com/users/uogbuji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-27T01:24:20 | 2023-09-10T16:11:21 | 2023-09-10T16:11:21 | NONE | null | ### Issue you'd like to raise.
LC Release announcements seem to be missing from Discord's Announcements channel since 0.0.166. Looking more closely, these seem to be manual, added by hwchase17.
On Twitter, the most recent release announcement from the LangChainAI account is 0.0.170, viz:
https://twitter.com/search?q=(from%3ALangChainAI)%20release&src=typed_query&f=live
### Suggestion:
I couldn't tell from this project's various actions whether such postings are meant to be automated upon release (GitHub search on actions isn't great) and just need to be fixed. If not, I think it would be very useful for the community to add such release notification actions, so that the various places people keep up to date are all, well, up to date. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5324/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5323 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5323/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5323/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5323/events | https://github.com/langchain-ai/langchain/pull/5323 | 1,728,371,158 | PR_kwDOIPDwls5Reu4d | 5,323 | Fix typos | {
"login": "russellpwirtz",
"id": 987237,
"node_id": "MDQ6VXNlcjk4NzIzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/987237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/russellpwirtz",
"html_url": "https://github.com/russellpwirtz",
"followers_url": "https://api.github.com/users/russellpwirtz/followers",
"following_url": "https://api.github.com/users/russellpwirtz/following{/other_user}",
"gists_url": "https://api.github.com/users/russellpwirtz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/russellpwirtz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/russellpwirtz/subscriptions",
"organizations_url": "https://api.github.com/users/russellpwirtz/orgs",
"repos_url": "https://api.github.com/users/russellpwirtz/repos",
"events_url": "https://api.github.com/users/russellpwirtz/events{/privacy}",
"received_events_url": "https://api.github.com/users/russellpwirtz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 1 | 2023-05-27T00:32:25 | 2023-05-28T03:57:49 | 2023-05-27T01:55:22 | CONTRIBUTOR | null | # Documentation typo fixes
Fixes # (issue)
Simple typos in the blockchain .ipynb documentation
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5323/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5323",
"html_url": "https://github.com/langchain-ai/langchain/pull/5323",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5323.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5323.patch",
"merged_at": "2023-05-27T01:55:22"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5322 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5322/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5322/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5322/events | https://github.com/langchain-ai/langchain/issues/5322 | 1,728,349,391 | I_kwDOIPDwls5nBITP | 5,322 | Issues with Azure OpenAI | {
"login": "ushakrishnan",
"id": 18739265,
"node_id": "MDQ6VXNlcjE4NzM5MjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18739265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ushakrishnan",
"html_url": "https://github.com/ushakrishnan",
"followers_url": "https://api.github.com/users/ushakrishnan/followers",
"following_url": "https://api.github.com/users/ushakrishnan/following{/other_user}",
"gists_url": "https://api.github.com/users/ushakrishnan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ushakrishnan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ushakrishnan/subscriptions",
"organizations_url": "https://api.github.com/users/ushakrishnan/orgs",
"repos_url": "https://api.github.com/users/ushakrishnan/repos",
"events_url": "https://api.github.com/users/ushakrishnan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ushakrishnan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-26T23:47:15 | 2023-05-31T23:40:55 | 2023-05-31T23:40:55 | NONE | null | ### System Info
OS = MACOS
langchain=0.0.179 (also tried 0.0.174 and 0.0.178)
### Who can help?
@hwchase17 @agola11
The full code below is single file. imports and other information not added to keep it crisp.
The following works with no issues:
```
llm = AzureOpenAI(openai_api_base=openai_api_base , model="text-davinci-003", engine="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", deployment_id="text-davinci-003", openai_api_key=openai_api_key)
resp = llm("Tell me pub joke")
print(resp)
```
The following does not work.
```
#get document store
store = getfromstore(collection_name="sou_coll")
# Create vectorstore info object - metadata repo?
vectorstore_info = VectorStoreInfo(
name="sou",
description="sou folder",
vectorstore=store
)
# Convert the document store into a langchain toolkit
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
# Add the toolkit to an end-to-end LC
agent_executor = create_vectorstore_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
response = agent_executor.run(prompt)
print(response)
```
I can confirm the document store exists and the same code with appropriate OpenAI (not Azure OpenAI) works as expected with no issue. Azure OpenAI gives the following error -
```
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
```
Observation: the LLM itself is configured correctly, since the first part (telling a joke) works; the agent is not. Please help!
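One hedged observation: the error above comes from an OpenAI `Completion` call that lacks an engine/deployment, which suggests some component other than the explicitly configured `llm` is building its own default OpenAI client. A sketch of one thing worth trying, assuming the toolkit accepts an `llm` argument in this version (not a confirmed fix):
```python
# Hedged suggestion: pass the Azure-configured LLM to the toolkit as well, so its
# internal QA tool does not fall back to a default OpenAI client without a deployment.
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info, llm=llm)
agent_executor = create_vectorstore_agent(llm=llm, toolkit=toolkit, verbose=True)
```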
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
#!/usr/bin/env python3
import sys
from dotenv import load_dotenv
# Load default environment variables (.env)
load_dotenv()
# Import os to set API key
import os
# Import OpenAI as main LLM service
from langchain.llms import AzureOpenAI
from langchain.callbacks import get_openai_callback
# Bring in streamlit for UI/app interface
import streamlit as st
# Import PDF document loaders...there's other ones as well!
from langchain.document_loaders import PyPDFLoader
# Import chroma as the vector store
from langchain.vectorstores import Chroma
from common.funs import getfromstore
# Import vector store stuff
from langchain.agents.agent_toolkits import (
create_vectorstore_agent,
VectorStoreToolkit,
VectorStoreInfo
)
# Set this to `azure`
openai_api_type = os.environ["OPENAI_API_TYPE"] ="azure"
openai_api_version = os.environ["OPENAI_API_VERSION"] = os.environ["AOAI_OPENAI_API_VERSION"]
openai_api_base = os.environ["OPENAI_API_BASE"] = os.environ["AOAI_OPENAI_API_BASE"]
openai_api_key = os.environ["OPENAI_API_KEY"] = os.environ["AOAI_OPENAI_API_KEY"]
# Create instance of OpenAI LLM
#llm = AzureOpenAI(openai_api_base=openai_api_base , model="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", openai_api_key=openai_api_key)
llm = AzureOpenAI(openai_api_base=openai_api_base , model="text-davinci-003", engine="text-davinci-003", temperature=0.1, verbose=True, deployment_name="text-davinci-003", deployment_id="text-davinci-003", openai_api_key=openai_api_key)
resp = llm("Tell me pub joke")
print(resp)
print("------------")
st.write(resp)
st.write("----------------------")
#get document store
store = getfromstore(collection_name="sou_coll")
#print(store1.get(["metadatas"]))
# Create vectorstore info object - metadata repo?
vectorstore_info = VectorStoreInfo(
name="sou",
description="sou folder",
vectorstore=store
)
# Convert the document store into a langchain toolkit
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
# Add the toolkit to an end-to-end LC
agent_executor = create_vectorstore_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
st.title("🦜🔗🤗 What would you like to know?")
st.write("This sample uses Azure OpenAI")
# Create a text input box for the user
prompt = st.text_input('Input your prompt here:')
# If the user hits enter
if prompt:
with get_openai_callback() as cb:
#try:
# Then pass the prompt to the LLM
response = agent_executor.run(prompt)
# ...and write it out to the screen
st.write(response)
st.write(cb)
#except Exception as e:
# st.warning
# st.write("That was a difficult question! I choked on it!! Can you please try again with rephrasing it a bit?")
# st.write(cb)
# print(e)
# Find the relevant pages
search = store.similarity_search_with_score(prompt)
# Write out the first
try:
st.write("This information was found in:")
for doc in search:
score = doc[1]
try:
page_num = doc[0].metadata['page']
except:
page_num = "txt snippets"
source = doc[0].metadata['source']
# With a streamlit expander
with st.expander("Source: " + str(source) + " - Page: " + str(page_num) + "; Similarity Score: " + str(score) ):
st.write(doc[0].page_content)
except:
print("unable to get source document detail")
```
### Expected behavior
The video shows the expected output - https://www.youtube.com/watch?v=q27RbxcfGvE
The OpenAI code in this sample is exact except for changes to LLM and env variables - file https://github.com/ushakrishnan/SearchWithOpenAI/blob/main/pages/6_Q%26A_with_Open_AI.py.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5322/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5321 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5321/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5321/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5321/events | https://github.com/langchain-ai/langchain/pull/5321 | 1,728,341,437 | PR_kwDOIPDwls5Reo1o | 5,321 | Allow ElasticsearchEmbeddings to create a connection with ES Client object | {
"login": "jeffvestal",
"id": 53237856,
"node_id": "MDQ6VXNlcjUzMjM3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/53237856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffvestal",
"html_url": "https://github.com/jeffvestal",
"followers_url": "https://api.github.com/users/jeffvestal/followers",
"following_url": "https://api.github.com/users/jeffvestal/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffvestal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffvestal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffvestal/subscriptions",
"organizations_url": "https://api.github.com/users/jeffvestal/orgs",
"repos_url": "https://api.github.com/users/jeffvestal/repos",
"events_url": "https://api.github.com/users/jeffvestal/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffvestal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-26T23:28:51 | 2023-05-31T01:05:47 | 2023-05-31T00:26:31 | CONTRIBUTOR | null | This PR adds a new method `from_es_connection` to the `ElasticsearchEmbeddings` class allowing users to use Elasticsearch clusters outside of Elastic Cloud.
Users can create an Elasticsearch Client object and pass that to the new function.
The returned object is identical to the one returned by calling `from_credentials`
```python
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings

# Create Elasticsearch connection
es_connection = Elasticsearch(
hosts=['https://es_cluster_url:port'],
basic_auth=('user', 'password')
)
# Instantiate ElasticsearchEmbeddings using es_connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,  # ID of a text embedding model already deployed in the Elasticsearch cluster
es_connection,
)
```
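The returned embeddings object can then be used like any other `Embeddings` implementation — for example (illustrative usage only, assuming `model_id` refers to a deployed text embedding model):
```python
# Illustrative usage of the embeddings object created above.
doc_vectors = embeddings.embed_documents(["hello world", "goodbye world"])
query_vector = embeddings.embed_query("hello")
```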
I also added examples to the elasticsearch jupyter notebook
Fixes # https://github.com/hwchase17/langchain/issues/5239
cc: @hwchase17
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5321/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5321",
"html_url": "https://github.com/langchain-ai/langchain/pull/5321",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5321.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5321.patch",
"merged_at": "2023-05-31T00:26:31"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5320 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5320/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5320/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5320/events | https://github.com/langchain-ai/langchain/pull/5320 | 1,728,321,483 | PR_kwDOIPDwls5Rekkd | 5,320 | fix: remove empty lines that cause InvalidRequestError | {
"login": "mbchang",
"id": 6439365,
"node_id": "MDQ6VXNlcjY0MzkzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6439365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbchang",
"html_url": "https://github.com/mbchang",
"followers_url": "https://api.github.com/users/mbchang/followers",
"following_url": "https://api.github.com/users/mbchang/following{/other_user}",
"gists_url": "https://api.github.com/users/mbchang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbchang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbchang/subscriptions",
"organizations_url": "https://api.github.com/users/mbchang/orgs",
"repos_url": "https://api.github.com/users/mbchang/repos",
"events_url": "https://api.github.com/users/mbchang/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbchang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T22:54:23 | 2023-05-28T04:15:04 | 2023-05-28T04:15:04 | CONTRIBUTOR | null |
# remove empty lines in GenerativeAgentMemory that cause InvalidRequestError in OpenAIEmbeddings
<!--
Thank you for contributing to LangChain! Your PR will appear in our release under the title you set. Please make sure it highlights your valuable contribution.
Replace this with a description of the change, the issue it fixes (if applicable), and relevant context. List any dependencies required for this change.
After you're done, someone will review your PR. They may suggest improvements. If no one reviews your PR within a few days, feel free to @-mention the same people again, as notifications can get lost.
-->
<!-- Remove if not applicable -->
Let's say the text given to `GenerativeAgent._parse_list` is
```
text = """
Insight 1: <insight 1>
Insight 2: <insight 2>
"""
```
This creates an `openai.error.InvalidRequestError: [''] is not valid under any of the given schemas - 'input'` because `GenerativeAgent.add_memory()` tries to add an empty string to the vectorstore.
This PR fixes the issue by removing the empty line between `Insight 1` and `Insight 2`.
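For illustration, a simplified sketch of the failure mode and the guard against it (this is not the actual `GenerativeAgentMemory` code, just the general idea):
```python
import re

# Simplified illustration: split model output into items and drop blank lines,
# so no empty string is ever passed to the embeddings/vectorstore.
def parse_list(text: str) -> list:
    lines = re.split(r"\n", text.strip())
    lines = [line for line in lines if line.strip()]
    return [re.sub(r"^\s*\d+\.\s*", "", line).strip() for line in lines]
```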
## Before submitting
<!-- If you're adding a new integration, please include:
1. a test for the integration - favor unit tests that do not rely on network access.
2. an example notebook showing its use
See contribution guidelines for more information on how to write tests, lint
etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
<!-- For a quicker response, figure out the right person to tag with @
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoaders
- @eyurtsev
Models
- @hwchase17
- @agola11
Agents / Tools / Toolkits
- @vowelparrot
VectorStores / Retrievers / Memory
- @dev2049
-->
@hwchase17
@vowelparrot
@dev2049
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5320/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5320",
"html_url": "https://github.com/langchain-ai/langchain/pull/5320",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5320.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5320.patch",
"merged_at": "2023-05-28T04:15:04"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5319 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5319/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5319/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5319/events | https://github.com/langchain-ai/langchain/pull/5319 | 1,728,316,763 | PR_kwDOIPDwls5Rejjw | 5,319 | Fixes and doc updates for DeepInfra integration. | {
"login": "yessenzhar",
"id": 8552242,
"node_id": "MDQ6VXNlcjg1NTIyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8552242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yessenzhar",
"html_url": "https://github.com/yessenzhar",
"followers_url": "https://api.github.com/users/yessenzhar/followers",
"following_url": "https://api.github.com/users/yessenzhar/following{/other_user}",
"gists_url": "https://api.github.com/users/yessenzhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yessenzhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yessenzhar/subscriptions",
"organizations_url": "https://api.github.com/users/yessenzhar/orgs",
"repos_url": "https://api.github.com/users/yessenzhar/repos",
"events_url": "https://api.github.com/users/yessenzhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/yessenzhar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-26T22:45:19 | 2023-05-29T13:56:33 | 2023-05-29T13:56:33 | CONTRIBUTOR | null | # Your PR Title (What it does)
<!--
Thank you for contributing to LangChain! Your PR will appear in our release under the title you set. Please make sure it highlights your valuable contribution.
Replace this with a description of the change, the issue it fixes (if applicable), and relevant context. List any dependencies required for this change.
After you're done, someone will review your PR. They may suggest improvements. If no one reviews your PR within a few days, feel free to @-mention the same people again, as notifications can get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
<!-- If you're adding a new integration, please include:
1. a test for the integration - favor unit tests that do not rely on network access.
2. an example notebook showing its use
See contribution guidelines for more information on how to write tests, lint
etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
- @hwchase17
- @agola11
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5319/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5319",
"html_url": "https://github.com/langchain-ai/langchain/pull/5319",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5319.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5319.patch",
"merged_at": "2023-05-29T13:56:33"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5318 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5318/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5318/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5318/events | https://github.com/langchain-ai/langchain/issues/5318 | 1,728,309,373 | I_kwDOIPDwls5nA-h9 | 5,318 | MultiRetrievalQAChain | {
"login": "SardarArslan",
"id": 82857336,
"node_id": "MDQ6VXNlcjgyODU3MzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/82857336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SardarArslan",
"html_url": "https://github.com/SardarArslan",
"followers_url": "https://api.github.com/users/SardarArslan/followers",
"following_url": "https://api.github.com/users/SardarArslan/following{/other_user}",
"gists_url": "https://api.github.com/users/SardarArslan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SardarArslan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SardarArslan/subscriptions",
"organizations_url": "https://api.github.com/users/SardarArslan/orgs",
"repos_url": "https://api.github.com/users/SardarArslan/repos",
"events_url": "https://api.github.com/users/SardarArslan/events{/privacy}",
"received_events_url": "https://api.github.com/users/SardarArslan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T22:32:19 | 2023-09-10T16:11:27 | 2023-09-10T16:11:26 | NONE | null | ### System Info
langchain='0.0.161'
python='3.9.13'
IPython= '7.31.1'
ipykernel='6.15.2'
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
retriever1 = Pinecone.from_documents(texts, embeddings,index_name='taxation').as_retriever()
retriever2 = Pinecone.from_documents(texts, embeddings,index_name='taxation').as_retriever()
retriever_infos = [
{
"name": "sindh",
"description": "Good for answering questions about Sindh",
"retriever": retriever1
},
{
"name": "punjab",
"description": "Good for answering questions about Punjab",
"retriever": retriever2
}]
chain = MultiRetrievalQAChain.from_retrievers(ChatOpenAI(model_name='gpt-3.5-turbo',temperature=0), retriever_infos,verbose=True)
chain.save('chain.json')
```
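A hedged aside before the traceback: the failure appears to come from an output parser inside the chain's prompt that does not implement `_type`, so this chain type cannot currently be serialized with `save()`. A possible workaround (an assumption, not a confirmed fix) is to persist only the lightweight retriever configuration and rebuild the chain at startup:
```python
# Hedged workaround sketch: store the serializable parts of retriever_infos and
# call MultiRetrievalQAChain.from_retrievers(...) again on startup instead of save().
import json

serializable = [{k: v for k, v in info.items() if k != "retriever"} for info in retriever_infos]
with open("retriever_infos.json", "w") as f:
    json.dump(serializable, f)
```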
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_20160\230129054.py in <module>
----> 1 chain.save('chain.json')
~\anaconda3\lib\site-packages\langchain\chains\base.py in save(self, file_path)
294
295 # Fetch dictionary to save
--> 296 chain_dict = self.dict()
297
298 if save_path.suffix == ".json":
~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs)
269 if self.memory is not None:
270 raise ValueError("Saving of memory is not yet supported.")
--> 271 _dict = super().dict()
272 _dict["_type"] = self._chain_type
273 return _dict
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value()
~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs)
269 if self.memory is not None:
270 raise ValueError("Saving of memory is not yet supported.")
--> 271 _dict = super().dict()
272 _dict["_type"] = self._chain_type
273 return _dict
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value()
~\anaconda3\lib\site-packages\langchain\chains\base.py in dict(self, **kwargs)
269 if self.memory is not None:
270 raise ValueError("Saving of memory is not yet supported.")
--> 271 _dict = super().dict()
272 _dict["_type"] = self._chain_type
273 return _dict
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value()
~\anaconda3\lib\site-packages\langchain\prompts\base.py in dict(self, **kwargs)
186 def dict(self, **kwargs: Any) -> Dict:
187 """Return dictionary representation of prompt."""
--> 188 prompt_dict = super().dict(**kwargs)
189 prompt_dict["_type"] = self._prompt_type
190 return prompt_dict
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel.dict()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in _iter()
~\anaconda3\lib\site-packages\pydantic\main.cp39-win_amd64.pyd in pydantic.main.BaseModel._get_value()
~\anaconda3\lib\site-packages\langchain\schema.py in dict(self, **kwargs)
354 """Return dictionary representation of output parser."""
355 output_parser_dict = super().dict()
--> 356 output_parser_dict["_type"] = self._type
357 return output_parser_dict
358
~\anaconda3\lib\site-packages\langchain\schema.py in _type(self)
349 def _type(self) -> str:
350 """Return the type key."""
--> 351 raise NotImplementedError
352
353 def dict(self, **kwargs: Any) -> Dict:
NotImplementedError:
```
### Expected behavior
I expected to save the chain on disk for future use. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5318/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5318/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5317 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5317/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5317/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5317/events | https://github.com/langchain-ai/langchain/issues/5317 | 1,728,248,298 | I_kwDOIPDwls5nAvnq | 5,317 | TypeError: a coroutine was expected, got {'question': query, 'chat_history': {...}} | {
"login": "talhaanwarch",
"id": 37379131,
"node_id": "MDQ6VXNlcjM3Mzc5MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/37379131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talhaanwarch",
"html_url": "https://github.com/talhaanwarch",
"followers_url": "https://api.github.com/users/talhaanwarch/followers",
"following_url": "https://api.github.com/users/talhaanwarch/following{/other_user}",
"gists_url": "https://api.github.com/users/talhaanwarch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talhaanwarch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talhaanwarch/subscriptions",
"organizations_url": "https://api.github.com/users/talhaanwarch/orgs",
"repos_url": "https://api.github.com/users/talhaanwarch/repos",
"events_url": "https://api.github.com/users/talhaanwarch/events{/privacy}",
"received_events_url": "https://api.github.com/users/talhaanwarch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-26T21:09:00 | 2023-09-25T10:16:10 | 2023-09-25T10:16:10 | NONE | null | ### System Info
langchain 0.0.181
Python 3.10
OS: Ubuntu
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
```
import asyncio
from functools import lru_cache
from typing import AsyncGenerator
from langchain.text_splitter import RecursiveCharacterTextSplitter
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from pydantic import BaseModel
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
```
```
api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
app = FastAPI()
```
```
with open('state_of_the_union.txt') as f:
state_of_the_union = f.read()
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 100,
chunk_overlap = 20,
length_function = len,
)
doc_text = text_splitter.create_documents([state_of_the_union])
```
```
embeddings = OpenAIEmbeddings(openai_api_key=api_key)
vector_db = Chroma.from_documents(doc_text, embeddings,persist_directory='db')
retriever = vector_db.as_retriever()
```
```
class ChatRequest(BaseModel):
"""Request model for chat requests.
Includes the conversation ID and the message from the user.
"""
conversation_id: str
message: str
```
```
class StreamingConversationChain:
"""
Class for handling streaming conversation chains.
It creates and stores memory for each conversation,
and generates responses using the ChatOpenAI model from LangChain.
"""
def __init__(self, openai_api_key: str, temperature: float = 0.0):
self.memories = {}
self.openai_api_key = openai_api_key
self.temperature = temperature
async def generate_response(
self, conversation_id: str, message: str
) -> AsyncGenerator[str, None]:
"""
Asynchronous function to generate a response for a conversation.
It creates a new conversation chain for each message and uses a
callback handler to stream responses as they're generated.
:param conversation_id: The ID of the conversation.
:param message: The message from the user.
"""
callback_handler = AsyncIteratorCallbackHandler()
llm = ChatOpenAI(
callbacks=[callback_handler],
streaming=True,
temperature=self.temperature,
openai_api_key=self.openai_api_key,
)
memory = self.memories.get(conversation_id)
if memory is None:
memory = ConversationBufferMemory(memory_key="chat_history",output_key='answer',
return_messages=True)
self.memories[conversation_id] = memory
chain = ConversationalRetrievalChain.from_llm(llm,
retriever=retriever, memory=memory,
chain_type="stuff",
# return_source_documents=True
)
run = asyncio.create_task(chain(({"question": message})))
async for token in callback_handler.aiter():
yield token
await run()
```
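A hedged note on the snippet above: `chain({...})` runs the chain synchronously and returns a result dict, so `asyncio.create_task` receives a dict instead of a coroutine (the TypeError reported below); `await run()` would also fail, since `create_task` returns a Task object, not a callable. A sketch of the async variant (an assumption based on the chain's async interface, not a verified fix):
```python
# Sketch only: use the chain's async API so create_task receives a coroutine.
run = asyncio.create_task(chain.acall({"question": message}))
async for token in callback_handler.aiter():
    yield token
await run
```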
```
streaming_conversation_chain = StreamingConversationChain(
openai_api_key=api_key
)
```
```
@app.post("/chat", response_class=StreamingResponse)
async def generate_response(data: ChatRequest) -> StreamingResponse:
"""Endpoint for chat requests.
It uses the StreamingConversationChain instance to generate responses,
and then sends these responses as a streaming response.
:param data: The request data.
"""
return StreamingResponse(
streaming_conversation_chain.generate_response(
data.conversation_id, data.message
),
media_type="text/event-stream",
)
```
```
if __name__ == "__main__":
import uvicorn
uvicorn.run(app)
```
Here is the error traceback:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/talha/venv/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 435, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/talha/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/talha/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/talha/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/talha/venv/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 270, in __call__
async with anyio.create_task_group() as task_group:
File "/home/talha/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/home/talha/venv/lib/python3.10/site-packages/starlette/responses.py", line 262, in stream_response
async for chunk in self.body_iterator:
File "/media/talha/data/nlp/langchain/fastapi/error_rep.py", line 93, in generate_response
run = asyncio.create_task(chain(({"question": message})))
File "/usr/lib/python3.10/asyncio/tasks.py", line 337, in create_task
task = loop.create_task(coro)
File "uvloop/loop.pyx", line 1435, in uvloop.loop.Loop.create_task
TypeError: a coroutine was expected, got {'question': 'what is cnn', 'chat_history': [HumanMessage(content='what is cnn', additional_kwargs={}, example=False), AIMessage(content='CNN (Cable News Network) is a news-based cable television channel and website that provides 24-hour news coverage, analysis, and commentary on current events happening around the world.', additional_kwargs={}, example=False)], 'answer': 'CNN (Cable News Network) is a news-based cable television channel and website that provides 24-hour news coverage, analysis, and commentary on current events happening around the world.'}
```
### Expected behavior
This code worked with `ConversationChain` and produce streaming output
```
chain = ConversationChain(
memory=memory,
prompt=CHAT_PROMPT_TEMPLATE,
llm=llm,
)
run = asyncio.create_task(chain.arun(input=message))
```
But I want to use ConversationalRetrievalChain. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5317/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5316 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5316/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5316/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5316/events | https://github.com/langchain-ai/langchain/issues/5316 | 1,728,211,849 | I_kwDOIPDwls5nAmuJ | 5,316 | VertexAIEmbeddings error when passing a list with of length greater than 5. | {
"login": "agvc",
"id": 15971370,
"node_id": "MDQ6VXNlcjE1OTcxMzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/15971370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agvc",
"html_url": "https://github.com/agvc",
"followers_url": "https://api.github.com/users/agvc/followers",
"following_url": "https://api.github.com/users/agvc/following{/other_user}",
"gists_url": "https://api.github.com/users/agvc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agvc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agvc/subscriptions",
"organizations_url": "https://api.github.com/users/agvc/orgs",
"repos_url": "https://api.github.com/users/agvc/repos",
"events_url": "https://api.github.com/users/agvc/events{/privacy}",
"received_events_url": "https://api.github.com/users/agvc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-26T20:31:56 | 2023-05-29T13:57:42 | 2023-05-29T13:57:42 | NONE | null | ### System Info
google-cloud-aiplatform==1.25.0
langchain==0.0.181
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Any list with len > 5 will cause an error.
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import VertexAIEmbeddings
text = ['text_1', 'text_2', 'text_3', 'text_4', 'text_5', 'text_6']
embeddings = VertexAIEmbeddings()
vectorstore = FAISS.from_texts(text, embeddings)
```
```python
InvalidArgument Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py](https://localhost:8080/#) in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InvalidArgument: 400 5 instance(s) is allowed per prediction. Actual: 6
```
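Since the service appears to accept at most five instances per request, a hedged sketch of client-side batching (similar in spirit to the pagination later added in PR #5325; the exact fix may differ):
```python
# Sketch only: embed texts in chunks of at most five per request.
def embed_in_batches(embeddings, texts, batch_size=5):
    vectors = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(embeddings.embed_documents(texts[i:i + batch_size]))
    return vectors
```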
### Expected behavior
Expected to be able to successfully vectorize a larger list of items. Maybe implement a step to | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5316/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5316/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5315 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5315/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5315/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5315/events | https://github.com/langchain-ai/langchain/pull/5315 | 1,728,163,644 | PR_kwDOIPDwls5ReCpi | 5,315 | minor refactor GenerativeAgentMemory | {
"login": "mbchang",
"id": 6439365,
"node_id": "MDQ6VXNlcjY0MzkzNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6439365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbchang",
"html_url": "https://github.com/mbchang",
"followers_url": "https://api.github.com/users/mbchang/followers",
"following_url": "https://api.github.com/users/mbchang/following{/other_user}",
"gists_url": "https://api.github.com/users/mbchang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbchang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbchang/subscriptions",
"organizations_url": "https://api.github.com/users/mbchang/orgs",
"repos_url": "https://api.github.com/users/mbchang/repos",
"events_url": "https://api.github.com/users/mbchang/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbchang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "vowelparrot",
"id": 130414180,
"node_id": "U_kgDOB8X2ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/130414180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vowelparrot",
"html_url": "https://github.com/vowelparrot",
"followers_url": "https://api.github.com/users/vowelparrot/followers",
"following_url": "https://api.github.com/users/vowelparrot/following{/other_user}",
"gists_url": "https://api.github.com/users/vowelparrot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vowelparrot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vowelparrot/subscriptions",
"organizations_url": "https://api.github.com/users/vowelparrot/orgs",
"repos_url": "https://api.github.com/users/vowelparrot/repos",
"events_url": "https://api.github.com/users/vowelparrot/events{/privacy}",
"received_events_url": "https://api.github.com/users/vowelparrot/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "vowelparrot",
"id": 130414180,
"node_id": "U_kgDOB8X2ZA",
"avatar_url": "https://avatars.githubusercontent.com/u/130414180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vowelparrot",
"html_url": "https://github.com/vowelparrot",
"followers_url": "https://api.github.com/users/vowelparrot/followers",
"following_url": "https://api.github.com/users/vowelparrot/following{/other_user}",
"gists_url": "https://api.github.com/users/vowelparrot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vowelparrot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vowelparrot/subscriptions",
"organizations_url": "https://api.github.com/users/vowelparrot/orgs",
"repos_url": "https://api.github.com/users/vowelparrot/repos",
"events_url": "https://api.github.com/users/vowelparrot/events{/privacy}",
"received_events_url": "https://api.github.com/users/vowelparrot/received_events",
"type": "User",
"site_admin": false
}
] | null | 1 | 2023-05-26T19:57:58 | 2023-06-03T21:53:14 | 2023-06-03T21:53:14 | CONTRIBUTOR | null | # minor refactor of GenerativeAgentMemory
<!--
Thank you for contributing to LangChain! Your PR will appear in our release under the title you set. Please make sure it highlights your valuable contribution.
Replace this with a description of the change, the issue it fixes (if applicable), and relevant context. List any dependencies required for this change.
After you're done, someone will review your PR. They may suggest improvements. If no one reviews your PR within a few days, feel free to @-mention the same people again, as notifications can get lost.
-->
<!-- Remove if not applicable -->
- refactor `format_memories_detail` to be more reusable
- modified prompts for getting topics for reflection and for generating insights
- update `characters.ipynb` to reflect changes
## Before submitting
<!-- If you're adding a new integration, please include:
1. a test for the integration - favor unit tests that do not rely on network access.
2. an example notebook showing its use
See contribution guidelines for more information on how to write tests, lint
etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
<!-- For a quicker response, figure out the right person to tag with @
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoaders
- @eyurtsev
Models
- @hwchase17
- @agola11
Agents / Tools / Toolkits
- @vowelparrot
VectorStores / Retrievers / Memory
- @dev2049
-->
@vowelparrot
@hwchase17
@dev2049
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5315/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5315",
"html_url": "https://github.com/langchain-ai/langchain/pull/5315",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5315.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5315.patch",
"merged_at": "2023-06-03T21:53:14"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5314 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5314/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5314/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5314/events | https://github.com/langchain-ai/langchain/pull/5314 | 1,728,147,636 | PR_kwDOIPDwls5Rd_Lb | 5,314 | Add Chainlit to deployment options | {
"login": "constantinidan",
"id": 16107237,
"node_id": "MDQ6VXNlcjE2MTA3MjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/16107237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/constantinidan",
"html_url": "https://github.com/constantinidan",
"followers_url": "https://api.github.com/users/constantinidan/followers",
"following_url": "https://api.github.com/users/constantinidan/following{/other_user}",
"gists_url": "https://api.github.com/users/constantinidan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/constantinidan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/constantinidan/subscriptions",
"organizations_url": "https://api.github.com/users/constantinidan/orgs",
"repos_url": "https://api.github.com/users/constantinidan/repos",
"events_url": "https://api.github.com/users/constantinidan/events{/privacy}",
"received_events_url": "https://api.github.com/users/constantinidan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T19:44:07 | 2023-05-28T04:12:54 | 2023-05-28T04:12:53 | CONTRIBUTOR | null | # Add Chainlit to deployment options
Add [Chainlit](https://github.com/Chainlit/chainlit) as a deployment option.
Links point to GitHub examples and the Chainlit docs for the LangChain integration. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5314/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5314",
"html_url": "https://github.com/langchain-ai/langchain/pull/5314",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5314.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5314.patch",
"merged_at": "2023-05-28T04:12:53"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5313 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5313/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5313/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5313/events | https://github.com/langchain-ai/langchain/issues/5313 | 1,728,123,134 | I_kwDOIPDwls5nARD- | 5,313 | Converting ChatOpenAI model to ONNX format | {
"login": "naila03",
"id": 104375200,
"node_id": "U_kgDOBjijoA",
"avatar_url": "https://avatars.githubusercontent.com/u/104375200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naila03",
"html_url": "https://github.com/naila03",
"followers_url": "https://api.github.com/users/naila03/followers",
"following_url": "https://api.github.com/users/naila03/following{/other_user}",
"gists_url": "https://api.github.com/users/naila03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naila03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naila03/subscriptions",
"organizations_url": "https://api.github.com/users/naila03/orgs",
"repos_url": "https://api.github.com/users/naila03/repos",
"events_url": "https://api.github.com/users/naila03/events{/privacy}",
"received_events_url": "https://api.github.com/users/naila03/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-26T19:19:53 | 2023-09-27T16:06:50 | 2023-09-27T16:06:49 | NONE | null | ### Feature request
Is there any way to convert ChatOpenAI models to ONNX format? I've noticed that other models can be converted to ONNX (example: https://github.com/openai/whisper/discussions/134) and I was wondering if similar logic could be applied in this case as well.
### Motivation
I want to save these models in the ONNX format so I can easily retrieve them and use them for question-answering, with the whole model stored as a single file.
### Your contribution
Not sure. I could create a PR if I'm able to succeed in this. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5313/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5312 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5312/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5312/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5312/events | https://github.com/langchain-ai/langchain/pull/5312 | 1,728,082,343 | PR_kwDOIPDwls5RdxAr | 5,312 | Update Prediction Guard LLM wrapper | {
"login": "dwhitena",
"id": 4524535,
"node_id": "MDQ6VXNlcjQ1MjQ1MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4524535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwhitena",
"html_url": "https://github.com/dwhitena",
"followers_url": "https://api.github.com/users/dwhitena/followers",
"following_url": "https://api.github.com/users/dwhitena/following{/other_user}",
"gists_url": "https://api.github.com/users/dwhitena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwhitena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwhitena/subscriptions",
"organizations_url": "https://api.github.com/users/dwhitena/orgs",
"repos_url": "https://api.github.com/users/dwhitena/repos",
"events_url": "https://api.github.com/users/dwhitena/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwhitena/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T18:44:05 | 2023-05-29T14:03:38 | 2023-05-29T14:03:38 | CONTRIBUTOR | null | # Update Prediction Guard LLM wrapper to the latest version/ functionality
No dependency updates here, but this updates the LLM wrapper for [Prediction Guard](https://www.predictionguard.com/) to the latest version of the Python client, which includes additional functionality. Specifically, the new version includes functionality to:
- control/ structure the output of LLMs
- access the latest open-access LLMs (e.g., MPT-7B-Instruct) with an OpenAI-like API (a rough usage sketch follows below)
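For illustration, a minimal sketch of how the controlled-output functionality might be used through the LangChain wrapper; the model name, `output` schema, and environment variable are assumptions based on the description above, not a verbatim copy of the PR:
```python
# Sketch only: parameter names and values below are assumptions, not the exact API from the PR.
import os

from langchain.llms import PredictionGuard

os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"  # assumed env var name

# Ask an open-access model for structured (categorical) output.
pgllm = PredictionGuard(
    model="MPT-7B-Instruct",  # assumed model identifier
    output={  # assumed output-control schema
        "type": "categorical",
        "categories": ["positive", "neutral", "negative"],
    },
)

print(pgllm("The service was fantastic and the staff were friendly."))
```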
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @hwchase17 or @vowelparrot (as they reviewed the original integration PR for Prediction Guard).
Thanks in advance!
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5312/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5312",
"html_url": "https://github.com/langchain-ai/langchain/pull/5312",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5312.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5312.patch",
"merged_at": "2023-05-29T14:03:38"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5311 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5311/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5311/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5311/events | https://github.com/langchain-ai/langchain/pull/5311 | 1,728,004,073 | PR_kwDOIPDwls5Rdf1Q | 5,311 | docs: added link to LangChain Handbook | {
"login": "leo-gan",
"id": 2256422,
"node_id": "MDQ6VXNlcjIyNTY0MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2256422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo-gan",
"html_url": "https://github.com/leo-gan",
"followers_url": "https://api.github.com/users/leo-gan/followers",
"following_url": "https://api.github.com/users/leo-gan/following{/other_user}",
"gists_url": "https://api.github.com/users/leo-gan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo-gan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo-gan/subscriptions",
"organizations_url": "https://api.github.com/users/leo-gan/orgs",
"repos_url": "https://api.github.com/users/leo-gan/repos",
"events_url": "https://api.github.com/users/leo-gan/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo-gan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T17:40:12 | 2023-05-28T18:28:09 | 2023-05-28T03:57:41 | COLLABORATOR | null | # added a link to LangChain Handbook
## Who can review?
Community members can review the PR once tests pass.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5311/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5311",
"html_url": "https://github.com/langchain-ai/langchain/pull/5311",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5311.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5311.patch",
"merged_at": "2023-05-28T03:57:40"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5310 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5310/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5310/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5310/events | https://github.com/langchain-ai/langchain/pull/5310 | 1,727,986,842 | PR_kwDOIPDwls5RdcHt | 5,310 | Add Spark DataFrame as a Document Loader | {
"login": "rithwik-db",
"id": 81988348,
"node_id": "MDQ6VXNlcjgxOTg4MzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/81988348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rithwik-db",
"html_url": "https://github.com/rithwik-db",
"followers_url": "https://api.github.com/users/rithwik-db/followers",
"following_url": "https://api.github.com/users/rithwik-db/following{/other_user}",
"gists_url": "https://api.github.com/users/rithwik-db/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rithwik-db/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rithwik-db/subscriptions",
"organizations_url": "https://api.github.com/users/rithwik-db/orgs",
"repos_url": "https://api.github.com/users/rithwik-db/repos",
"events_url": "https://api.github.com/users/rithwik-db/events{/privacy}",
"received_events_url": "https://api.github.com/users/rithwik-db/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T17:23:37 | 2023-05-29T14:10:27 | 2023-05-29T14:10:26 | CONTRIBUTOR | null | # Add Spark DataFrame as a Document Loader
This is currently a work-in-progress PR on adding Spark DataFrames as a Document Loader **(tests haven't been added yet)**. LangChain already has a Pandas DataFrame loader, so extending support to Spark seemed like the natural next step. The core issue is that Spark DataFrames are usually not stored on a single worker, so instead of making a major code change to allow `yield`-based loading in Document Loaders, I check how much memory is available and cap the size of this DocumentLoader's list at a fraction of it. It is currently set to 1/2, but it should probably be something like 1/10 or 1/20 for regular usage. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5310/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5310",
"html_url": "https://github.com/langchain-ai/langchain/pull/5310",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5310.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5310.patch",
"merged_at": "2023-05-29T14:10:26"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5309 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5309/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5309/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5309/events | https://github.com/langchain-ai/langchain/pull/5309 | 1,727,972,765 | PR_kwDOIPDwls5RdZEy | 5,309 | docs: improve flow of llm caching notebook | {
"login": "malandis",
"id": 3690240,
"node_id": "MDQ6VXNlcjM2OTAyNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3690240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/malandis",
"html_url": "https://github.com/malandis",
"followers_url": "https://api.github.com/users/malandis/followers",
"following_url": "https://api.github.com/users/malandis/following{/other_user}",
"gists_url": "https://api.github.com/users/malandis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/malandis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/malandis/subscriptions",
"organizations_url": "https://api.github.com/users/malandis/orgs",
"repos_url": "https://api.github.com/users/malandis/repos",
"events_url": "https://api.github.com/users/malandis/events{/privacy}",
"received_events_url": "https://api.github.com/users/malandis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T17:11:33 | 2023-05-26T17:34:12 | 2023-05-26T17:34:12 | CONTRIBUTOR | null | # docs: improve flow of llm caching notebook
The `llm_caching` notebook demos various caching providers. In the previous version, the setup common to all of the examples sat under the `In Memory Caching` heading.
If a user only wants to try a particular provider, they may jump straight to the cells for that provider without running the common setup buried under `In Memory Caching`, and then hit import and variable-reference errors. This commit moves the common setup to the top of the notebook to avoid this.
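For context, a minimal sketch of the kind of shared setup involved (the exact cells in the notebook may differ; the model settings here are assumptions):
```python
# Rough sketch of the shared setup; the notebook's exact cells and model settings may differ.
import langchain
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

# The LLM object used by every provider section in the notebook.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)

# Each provider section then swaps in its own cache backend; the in-memory one is shown here.
langchain.llm_cache = InMemoryCache()

llm("Tell me a joke")  # first call goes to the API
llm("Tell me a joke")  # repeat call is served from the cache
```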
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@dev2049 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5309/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5309",
"html_url": "https://github.com/langchain-ai/langchain/pull/5309",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5309.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5309.patch",
"merged_at": "2023-05-26T17:34:12"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5308 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5308/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5308/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5308/events | https://github.com/langchain-ai/langchain/issues/5308 | 1,727,924,219 | I_kwDOIPDwls5m_gf7 | 5,308 | Issue: All types of langchain memories don't work in a proper way. | {
"login": "Eng-ZeyadTarek",
"id": 61242145,
"node_id": "MDQ6VXNlcjYxMjQyMTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/61242145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eng-ZeyadTarek",
"html_url": "https://github.com/Eng-ZeyadTarek",
"followers_url": "https://api.github.com/users/Eng-ZeyadTarek/followers",
"following_url": "https://api.github.com/users/Eng-ZeyadTarek/following{/other_user}",
"gists_url": "https://api.github.com/users/Eng-ZeyadTarek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eng-ZeyadTarek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eng-ZeyadTarek/subscriptions",
"organizations_url": "https://api.github.com/users/Eng-ZeyadTarek/orgs",
"repos_url": "https://api.github.com/users/Eng-ZeyadTarek/repos",
"events_url": "https://api.github.com/users/Eng-ZeyadTarek/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eng-ZeyadTarek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-26T16:33:34 | 2023-09-18T16:10:41 | 2023-09-18T16:10:40 | NONE | null | I've built many LangChain agents with many types of memory so that they remember the context of the conversation correctly.
The problem is that if I send the same message again, the agent generates the same thought and answer again, which is not what I want.
Example of the incorrect conversation that would happen between me and the agent when using **GPTIndexChatMemory**
**User**:- Hi
**AI**:- Hi, how can I help you today?
**User**: What's machine learning?
**AI**:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(good answer referring to my previous question)**
**User**:- I didn't get it.
**AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. **(good answer referring to my previous message, and he knew that I didn't get the definition of ML)**
**User**:- what's the physics?
**AI**:- Physics is a natural science that studies the fundamental principles and laws governing the physical world. It seeks to understand the behavior of matter, energy, and the interactions between them. **(good answer referring to my previous message)**
**User**:- I didn't get it. **(same message again which saved in the previous chat history)**
**AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. **(not the correct answer, I'm referring to the definition of physics not the definition of ML)**
**User**: What's machine learning?
**AI**:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(He generated the same answer from the previous chat history, while I need him to give the user more information, not just the same previous answer)**
Example of the correct conversation I want to accomplish when using **GPTIndexChatMemory**
**User**:- Hi
**AI**:- Hi, how can I help you today?
**User**: What's machine learning?
**AI**:- Machine learning is a field of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. The primary goal of machine learning is to enable computers to automatically learn and improve from experience or examples. **(good answer referring to my previous question)**
**User**:- I didn't get it.
**AI**:- I apologize if my previous response was not clear. Let me provide a simpler explanation: Machine learning is a branch of artificial intelligence (AI) that focuses on teaching computers to learn and make decisions or predictions without being explicitly programmed for each specific task. **(good answer referring to my previous message, and he knew that I didn't get the definition of ML)**
**User**:- what's the physics?
**AI**:- Physics is a natural science that studies the fundamental principles and laws governing the physical world. It seeks to understand the behavior of matter, energy, and the interactions between them. **(good answer referring to my previous message)**
**User**:- I didn't get it.
**AI**:- I apologize for any confusion caused. Let me provide a more simplified explanation of physics: Physics is the branch of science that deals with understanding and explaining how the world around us works. It is concerned with studying matter (everything that makes up the physical world) and energy (the ability to do work). **(now it's good, because he knows that I'm referring to the definition of physics, not the definition of ML, although the "I didn't get it." message was already in the previous chat history.)**
**User**: What's machine learning?
**AI**:- Machine learning is a field of artificial intelligence that focuses on developing algorithms and models capable of learning from data and making predictions or decisions. The primary idea behind machine learning is to enable computers to learn and improve automatically without explicit programming. **(better answer: although I repeated the same question, he didn't just repeat the same answer from the previous chat history)**
I know the problem is with the memory, because if I build my agent with **ConversationBufferWindowMemory** with k = 1 (sketched below) he handles this type of conversation correctly, but since I'm using **GPTIndexChatMemory** he saves all the messages, questions, and answers of the full conversation in that memory and brings back the same answer from the previous chat history if the **message/question** is repeated, which is totally wrong.
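For comparison, the window-memory configuration referred to above would look roughly like this (a sketch, not the author's exact code):
```python
# Sketch of the k = 1 window memory mentioned above (not the author's exact code).
from langchain.memory import ConversationBufferWindowMemory

# Keeps only the single most recent exchange, which is why the agent
# stops echoing answers from older turns when this memory is used.
window_memory = ConversationBufferWindowMemory(
    k=1,
    memory_key="chat_history",
    return_messages=True,
)
```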
This is the prompt I use to instruct my agent, which is a **CONVERSATIONAL_REACT_DESCRIPTION** agent:
"""
SMSM bot, your main objective is to provide the most helpful and accurate responses to the user Zeyad. To do this, you have a powerful toolset and the ability to learn and adapt to the conversation's context
GOAL: The priority is to keep the conversation flowing smoothly. Offer new insights, avoid repetitive responses, and refrain from chat history without considering the most recent context. Always place emphasis on the most recent question or topic raised by the user, and tailor your responses to match his inquiries.
Consider the following scenarios:
**Scenario 1**: Whenever the user introduces a new topic, all his subsequent messages are assumed to refer to this latest topic, even if this message/question already exists in the previous chat history as it is in previous conversations under different topics. This context remains until the user changes the topic explicitly. Do not seek clarification on the topic unless the user's message is ambiguous within the context of the latest topic, For example, if the user asked about Machine Learning and then about Physics, and subsequently said, "I didn't get it," your responsibility is to provide further explanation about Physics (the latest topic), and not Machine Learning (the previous topic) or ask which topic he's referring to. The phrase "I didn't get it" must be associated with the most recent topic discussed.
**Scenario 2:** If the user asks the same question or a general knowledge question that has been asked before and you answered it, don't just repeat the previous answer verbatim or without relying on the previous chat history answer. Instead, try to add more value, provide a different perspective, or delve deeper into the topic and aim to generate a better and different answer that provides additional value.
You MUST use the following format to provide the answer to the user:
**Thought**: I have to see what the current topic we are currently discussing with the user based on the current topic, deeply analyze the user's message, find out his intention, and see if the user refers to the current topic or not regardless of previous chat history and with regarding (Scenario 1, GOAL)
**AI**: [your response here]
Begin!
Previous chat history:
{chat_history}
New input: {input}
"""
That's the way I define the agent and my memory.
```python
# Imports added for completeness; exact llama_index import paths may differ by version.
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import HuggingFaceEmbeddings
from llama_index import GPTListIndex, LangchainEmbedding, ServiceContext
from llama_index.langchain_helpers.memory_wrapper import GPTIndexChatMemory
from llama_index.query_engine import RetrieverQueryEngine

embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = GPTListIndex([], service_context=service_context)

# retriever = index.as_retriever(retriever_mode='embedding')
# query_engine = RetrieverQueryEngine(retriever)

memory = GPTIndexChatMemory(
    index=index,
    memory_key="chat_history",
    query_kwargs={"response_mode": "compact"},
    input_key="input",
)

# `tools` is defined elsewhere in my code.
agent_chain = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    memory=memory,
)
```
 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5308/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5307 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5307/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5307/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5307/events | https://github.com/langchain-ai/langchain/issues/5307 | 1,727,914,846 | I_kwDOIPDwls5m_eNe | 5,307 | Add a langchain.embeddings.AnthropicEmbeddings class | {
"login": "mirodrr",
"id": 113373534,
"node_id": "U_kgDOBsHxXg",
"avatar_url": "https://avatars.githubusercontent.com/u/113373534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mirodrr",
"html_url": "https://github.com/mirodrr",
"followers_url": "https://api.github.com/users/mirodrr/followers",
"following_url": "https://api.github.com/users/mirodrr/following{/other_user}",
"gists_url": "https://api.github.com/users/mirodrr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mirodrr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mirodrr/subscriptions",
"organizations_url": "https://api.github.com/users/mirodrr/orgs",
"repos_url": "https://api.github.com/users/mirodrr/repos",
"events_url": "https://api.github.com/users/mirodrr/events{/privacy}",
"received_events_url": "https://api.github.com/users/mirodrr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T16:25:57 | 2023-05-26T17:02:37 | 2023-05-26T17:02:37 | NONE | null | ### Feature request
Add a langchain.embeddings.AnthropicEmbeddings class, similar to the langchain.embeddings.OpenAIEmbeddings class
### Motivation
I am trying to modify this notebook to use Claude by Anthropic instead of OpenAI: https://github.com/pinecone-io/examples/blob/master/generation/langchain/handbook/05-langchain-retrieval-augmentation.ipynb
This notebook uses Pinecone and an OpenAI LLM to do retrieval augmentation, but I would like to use Claude by Anthropic.
However, I am stuck because there is no corresponding langchain.embeddings.AnthropicEmbeddings class to replace the langchain.embeddings.OpenAIEmbeddings class that is used in this example.
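As an aside, a rough sketch of one interim way to pair Claude with an existing embedding class in such a setup; the index name, Pinecone environment, and embedding choice here are placeholder assumptions, not taken from the linked notebook:
```python
# Sketch only: index name, Pinecone environment, and embedding choice are placeholder assumptions.
import os

import pinecone
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatAnthropic
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-east1-gcp")  # placeholder env

embeddings = HuggingFaceEmbeddings()  # stand-in for the requested AnthropicEmbeddings
vectorstore = Pinecone.from_existing_index("my-index", embeddings)  # placeholder index name

qa = RetrievalQA.from_chain_type(
    llm=ChatAnthropic(),  # Claude via the Anthropic chat wrapper
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)
print(qa.run("What does the indexed corpus say about retrieval augmentation?"))
```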
### Your contribution
I am willing to contribute, but would appreciate some guidance. I am very new to this project | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5307/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5307/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5306 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5306/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5306/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5306/events | https://github.com/langchain-ai/langchain/pull/5306 | 1,727,901,493 | PR_kwDOIPDwls5RdJkB | 5,306 | Fix: Handle empty documents in ContextualCompressionRetriever (Issue #5304) | {
"login": "hanguofeng",
"id": 18056,
"node_id": "MDQ6VXNlcjE4MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/18056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanguofeng",
"html_url": "https://github.com/hanguofeng",
"followers_url": "https://api.github.com/users/hanguofeng/followers",
"following_url": "https://api.github.com/users/hanguofeng/following{/other_user}",
"gists_url": "https://api.github.com/users/hanguofeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hanguofeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanguofeng/subscriptions",
"organizations_url": "https://api.github.com/users/hanguofeng/orgs",
"repos_url": "https://api.github.com/users/hanguofeng/repos",
"events_url": "https://api.github.com/users/hanguofeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/hanguofeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T16:16:31 | 2023-05-29T15:13:09 | 2023-05-28T20:19:34 | CONTRIBUTOR | null | # Fix: Handle empty documents in ContextualCompressionRetriever (Issue #5304)
Fixes #5304
Prevent cohere.error.CohereAPIError caused by an empty list of documents by adding a condition to check if the input documents list is empty in the compress_documents method. If the list is empty, return an empty list immediately, avoiding the error and unnecessary processing.
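A minimal sketch of the guard described above (illustrative wrapper only; the actual change lives inside the CohereRerank compressor's compress_documents method):
```python
# Illustrative wrapper showing the guard; the real fix is inside CohereRerank.compress_documents.
from typing import Any, Sequence

from langchain.schema import Document


def compress_documents_with_guard(
    compressor: Any, documents: Sequence[Document], query: str
) -> Sequence[Document]:
    if len(documents) == 0:
        # Cohere's rerank endpoint rejects an empty document list, so return early
        # instead of triggering cohere.error.CohereAPIError.
        return []
    return compressor.compress_documents(documents, query)
```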
@dev2049 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5306/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5306",
"html_url": "https://github.com/langchain-ai/langchain/pull/5306",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5306.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5306.patch",
"merged_at": "2023-05-28T20:19:34"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5305 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5305/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5305/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5305/events | https://github.com/langchain-ai/langchain/pull/5305 | 1,727,894,424 | PR_kwDOIPDwls5RdIAR | 5,305 | Add SKLearnVectorStore | {
"login": "mrtj",
"id": 3469711,
"node_id": "MDQ6VXNlcjM0Njk3MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3469711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrtj",
"html_url": "https://github.com/mrtj",
"followers_url": "https://api.github.com/users/mrtj/followers",
"following_url": "https://api.github.com/users/mrtj/following{/other_user}",
"gists_url": "https://api.github.com/users/mrtj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrtj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrtj/subscriptions",
"organizations_url": "https://api.github.com/users/mrtj/orgs",
"repos_url": "https://api.github.com/users/mrtj/repos",
"events_url": "https://api.github.com/users/mrtj/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrtj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4678528817,
"node_id": "LA_kwDOIPDwls8AAAABFtyvMQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
}
] | closed | false | null | [] | null | 1 | 2023-05-26T16:11:13 | 2023-05-28T15:17:42 | 2023-05-28T15:17:42 | CONTRIBUTOR | null | # Add SKLearnVectorStore
This PR adds SKLearnVectorStore, a simple vector store based on the NearestNeighbors implementations in the scikit-learn package. This provides a drop-in vector store implementation with minimal dependencies (scikit-learn is typically already installed in a data scientist / ML engineer environment). The vector store can be persisted to and loaded from JSON, BSON and Parquet formats.
SKLearnVectorStore has a soft (dynamic) dependency on the scikit-learn, numpy and pandas packages. Persisting to BSON requires the bson package; persisting to Parquet requires the pyarrow package.
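A short usage sketch (argument names follow the accompanying notebook and integration tests and may differ slightly in the merged version):
```python
# Usage sketch; persist_path/serializer argument names are taken from the PR's notebook and may vary.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import SKLearnVectorStore

texts = ["LangChain ships many vector stores.", "scikit-learn provides NearestNeighbors."]
embeddings = HuggingFaceEmbeddings()

store = SKLearnVectorStore.from_texts(
    texts,
    embeddings,
    persist_path="./sklearn_vectorstore.json",  # assumed parameter name
    serializer="json",                          # json | bson | parquet
)
store.persist()  # write the index to persist_path

print(store.similarity_search("vector store", k=1))
```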
## Before submitting
Integration tests are provided under `tests/integration_tests/vectorstores/test_sklearn.py`
A sample usage notebook is provided under `docs/modules/indexes/vectorstores/examples/sklear.ipynb`
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@hwchase17 - project lead
VectorStores / Retrievers / Memory
@dev2049
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5305/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5305/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5305",
"html_url": "https://github.com/langchain-ai/langchain/pull/5305",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5305.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5305.patch",
"merged_at": "2023-05-28T15:17:42"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5304 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5304/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5304/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5304/events | https://github.com/langchain-ai/langchain/issues/5304 | 1,727,893,507 | I_kwDOIPDwls5m_ZAD | 5,304 | CohereAPIError thrown when base retriever returns empty documents in ContextualCompressionRetriever using Cohere Rank | {
"login": "hanguofeng",
"id": 18056,
"node_id": "MDQ6VXNlcjE4MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/18056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanguofeng",
"html_url": "https://github.com/hanguofeng",
"followers_url": "https://api.github.com/users/hanguofeng/followers",
"following_url": "https://api.github.com/users/hanguofeng/following{/other_user}",
"gists_url": "https://api.github.com/users/hanguofeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hanguofeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanguofeng/subscriptions",
"organizations_url": "https://api.github.com/users/hanguofeng/orgs",
"repos_url": "https://api.github.com/users/hanguofeng/repos",
"events_url": "https://api.github.com/users/hanguofeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/hanguofeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-26T16:10:47 | 2023-05-28T20:19:35 | 2023-05-28T20:19:35 | CONTRIBUTOR | null | ### System Info
- 5.19.0-42-generic # 43~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Apr 21 16:51:08 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
- langchain==0.0.180
- Python 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Set up a retriever using any type of retriever (for example, I used Pinecone).
2. Pass it into the ContextualCompressionRetriever.
3. If the base retriever returns an empty list of documents,
4. it throws an error: **cohere.error.CohereAPIError: invalid request: list of documents must not be empty**
> File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/contextual_compression.py", line 37, in get_relevant_documents
> compressed_docs = self.base_compressor.compress_documents(docs, query)
> File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/document_compressors/cohere_rerank.py", line 57, in compress_documents
> results = self.client.rerank(
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 633, in rerank
> reranking = Reranking(self._request(cohere.RERANK_URL, json=json_body))
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 692, in _request
> self._check_response(json_response, response.headers, response.status_code)
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 642, in _check_response
> raise CohereAPIError(
> **cohere.error.CohereAPIError: invalid request: list of documents must not be empty**
The code looks like this:
```python
retriever = vectorstore.as_retriever()
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever
)
return compression_retriever
```
### Expected behavior
**No error is thrown** and an empty list is returned. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5304/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5303 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5303/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5303/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5303/events | https://github.com/langchain-ai/langchain/pull/5303 | 1,727,839,533 | PR_kwDOIPDwls5Rc8Hs | 5,303 | RFC: llm / chat model tabs | {
"login": "dev2049",
"id": 130488702,
"node_id": "U_kgDOB8cZfg",
"avatar_url": "https://avatars.githubusercontent.com/u/130488702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dev2049",
"html_url": "https://github.com/dev2049",
"followers_url": "https://api.github.com/users/dev2049/followers",
"following_url": "https://api.github.com/users/dev2049/following{/other_user}",
"gists_url": "https://api.github.com/users/dev2049/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dev2049/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dev2049/subscriptions",
"organizations_url": "https://api.github.com/users/dev2049/orgs",
"repos_url": "https://api.github.com/users/dev2049/repos",
"events_url": "https://api.github.com/users/dev2049/events{/privacy}",
"received_events_url": "https://api.github.com/users/dev2049/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4678528810,
"node_id": "LA_kwDOIPDwls8AAAABFtyvKg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 0 | 2023-05-26T15:31:41 | 2023-06-22T08:20:00 | 2023-06-22T08:20:00 | CONTRIBUTOR | null | only in quickstart atm but could do in other places as well
https://python.langchain.com/en/dev2049-combine_quickstart/getting_started/getting_started.html#prompt-templates
<img width="927" alt="Screenshot 2023-05-26 at 3 17 59 AM" src="https://github.com/hwchase17/langchain/assets/130488702/a0daf86b-42aa-42f6-a491-0bae607fe85b">
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5303/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5303/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5303",
"html_url": "https://github.com/langchain-ai/langchain/pull/5303",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5303.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5303.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5302 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5302/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5302/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5302/events | https://github.com/langchain-ai/langchain/pull/5302 | 1,727,837,946 | PR_kwDOIPDwls5Rc7yR | 5,302 | bump 181 | {
"login": "dev2049",
"id": 130488702,
"node_id": "U_kgDOB8cZfg",
"avatar_url": "https://avatars.githubusercontent.com/u/130488702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dev2049",
"html_url": "https://github.com/dev2049",
"followers_url": "https://api.github.com/users/dev2049/followers",
"following_url": "https://api.github.com/users/dev2049/following{/other_user}",
"gists_url": "https://api.github.com/users/dev2049/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dev2049/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dev2049/subscriptions",
"organizations_url": "https://api.github.com/users/dev2049/orgs",
"repos_url": "https://api.github.com/users/dev2049/repos",
"events_url": "https://api.github.com/users/dev2049/events{/privacy}",
"received_events_url": "https://api.github.com/users/dev2049/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5010622926,
"node_id": "LA_kwDOIPDwls8AAAABKqgJzg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/release",
"name": "release",
"color": "07D4BE",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-26T15:30:28 | 2023-05-26T15:44:20 | 2023-05-26T15:44:19 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5302/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5302",
"html_url": "https://github.com/langchain-ai/langchain/pull/5302",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5302.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5302.patch",
"merged_at": "2023-05-26T15:44:19"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5301 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5301/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5301/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5301/events | https://github.com/langchain-ai/langchain/issues/5301 | 1,727,836,402 | I_kwDOIPDwls5m_LDy | 5,301 | Vertex ChatVertexAI() doesn't support initialize_agent() as OutputParserException error | {
"login": "axacheng",
"id": 5593412,
"node_id": "MDQ6VXNlcjU1OTM0MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5593412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/axacheng",
"html_url": "https://github.com/axacheng",
"followers_url": "https://api.github.com/users/axacheng/followers",
"following_url": "https://api.github.com/users/axacheng/following{/other_user}",
"gists_url": "https://api.github.com/users/axacheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/axacheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/axacheng/subscriptions",
"organizations_url": "https://api.github.com/users/axacheng/orgs",
"repos_url": "https://api.github.com/users/axacheng/repos",
"events_url": "https://api.github.com/users/axacheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/axacheng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T15:29:14 | 2023-09-10T16:11:32 | 2023-09-10T16:11:31 | NONE | null | ### System Info
google-cloud-aiplatform==1.25.0
langchain==0.0.180
python 3.11
### Who can help?
@dev2049
@Jflick58
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
question1 = "I am axa, I'm a 2 months old baby.."
question2 = "I like eating 🍌 🍉 🫐 but dislike 🥑"
question3 = "what is my name?"
question4 = "Do i disklike 🍌?"
agent_chain = initialize_agent(
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
tools=[],
llm=llm,
verbose=True,
max_iterations=3,
memory=ConversationBufferMemory(
memory_key="chat_history", return_messages=True),
)
agent_chain.run(input=question1)
agent_chain.run(input=question2)
agent_chain.run(input=question3)
agent_chain.run(input=question4)
File "/Users/axa/workspace/h/default/genai_learning/post/api/app/routes/v1/quiz_chat.py", line 271, in ask
agent_chain.run(input=question1)
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 239, in run
return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 140, in __call__
raise e
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 951, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 773, in _take_next_step
raise e
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 762, in _take_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 444, in plan
return self.output_parser.parse(full_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/axa/workspace/h/default/genai_learning/post/.venv/lib/python3.11/site-packages/langchain/agents/conversational/output_parser.py", line 23, in parse
raise OutputParserException(f"Could not parse LLM output: `{text}`")
langchain.schema.OutputParserException: **Could not parse LLM output: `Hi Axa, it's nice to meet you! I'm Bard, a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.**
### Expected behavior
When I used the same code with ChatOpenAI(), it worked perfectly.
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Hello Axa! As an AI language model, I'm not able to see or interact with you physically, but I'm here to assist you with any questions or topics you might have. How can I assist you today?
> Finished chain.
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: It's great to hear that you enjoy eating bananas, watermelons, and blueberries! However, it's understandable that you might not like avocados. Everyone has their own preferences when it comes to food. Is there anything else you would like to discuss or ask about?
> Finished chain.
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Your name is Axa, as you mentioned earlier.
> Finished chain.
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: You did not mention that you dislike bananas, so I cannot say for sure. However, based on your previous message, it seems that you enjoy eating bananas.
> Finished chain.
INFO: 127.0.0.1:57044 - "POST /api/v1/quiz/ask HTTP/1.1" 200 OK | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5301/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5300 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5300/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5300/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5300/events | https://github.com/langchain-ai/langchain/issues/5300 | 1,727,780,002 | I_kwDOIPDwls5m-9Si | 5,300 | Weaviate: Add support for other vectorizers in hybrid search | {
"login": "hsm207",
"id": 2398765,
"node_id": "MDQ6VXNlcjIzOTg3NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2398765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsm207",
"html_url": "https://github.com/hsm207",
"followers_url": "https://api.github.com/users/hsm207/followers",
"following_url": "https://api.github.com/users/hsm207/following{/other_user}",
"gists_url": "https://api.github.com/users/hsm207/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hsm207/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsm207/subscriptions",
"organizations_url": "https://api.github.com/users/hsm207/orgs",
"repos_url": "https://api.github.com/users/hsm207/repos",
"events_url": "https://api.github.com/users/hsm207/events{/privacy}",
"received_events_url": "https://api.github.com/users/hsm207/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-05-26T14:54:53 | 2023-09-18T16:10:45 | 2023-09-18T16:10:45 | CONTRIBUTOR | null | ### Feature request
We should add support for the following vectorizers in the [Weaviate hybrid search](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html) (a rough wiring sketch follows the list below):
1. [cohere](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-cohere)
2. [palm](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-palm)
3. [huggingface](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-huggingface)
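For illustration, a sketch of how a non-OpenAI vectorizer could be wired in through the existing retriever; the header name and class configuration are assumptions based on Weaviate's module conventions, not current LangChain behavior:
```python
# Sketch only: assumes a Weaviate class configured with text2vec-cohere on the server side;
# the extra header name follows Weaviate's module convention and is an assumption here.
import os

import weaviate
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever

client = weaviate.Client(
    url=os.environ["WEAVIATE_URL"],
    additional_headers={"X-Cohere-Api-Key": os.environ["COHERE_API_KEY"]},
)

retriever = WeaviateHybridSearchRetriever(
    client, index_name="LangChain", text_key="text", attributes=[]
)
print(retriever.get_relevant_documents("hybrid search with cohere embeddings"))
```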
### Motivation
More flexibility for users.
### Your contribution
code review | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5300/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5300/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5299 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5299/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5299/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5299/events | https://github.com/langchain-ai/langchain/issues/5299 | 1,727,665,260 | I_kwDOIPDwls5m-hRs | 5,299 | Failure to run docpage examples | {
"login": "juan-carvajal",
"id": 39658930,
"node_id": "MDQ6VXNlcjM5NjU4OTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/39658930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juan-carvajal",
"html_url": "https://github.com/juan-carvajal",
"followers_url": "https://api.github.com/users/juan-carvajal/followers",
"following_url": "https://api.github.com/users/juan-carvajal/following{/other_user}",
"gists_url": "https://api.github.com/users/juan-carvajal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juan-carvajal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juan-carvajal/subscriptions",
"organizations_url": "https://api.github.com/users/juan-carvajal/orgs",
"repos_url": "https://api.github.com/users/juan-carvajal/repos",
"events_url": "https://api.github.com/users/juan-carvajal/events{/privacy}",
"received_events_url": "https://api.github.com/users/juan-carvajal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-26T13:48:05 | 2023-06-05T08:51:01 | 2023-05-29T13:46:33 | NONE | null | ### System Info
Version: 0.0.180
Python: 3.10.11
OS: macOS Monterey 12.5.1 (Apple Silicon)
Steps to reproduce:
```
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("What is EPAM price in NYSE? What is that number raised to the 0.23 power?")
```
Output:
```
{
"name": "OutputParserException",
"message": "Could not parse LLM output: Thought: I need to use a search engine to find the current price of EPAM on NYSE and a calculator to raise it to the 0.23 power.\n\nAction:\n```\n{\n \"action\": \"Search\",\n \"action_input\": \"EPAM NYSE price\"\n}\n```\n\n",
"stack": "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[0;31mJSONDecodeError\u001b[0m Traceback (most recent call last)\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/chat/output_parser.py:21\u001b[0m, in \u001b[0;36mChatOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 20\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[0;32m---> 21\u001b[0m response \u001b[39m=\u001b[39m parse_json_markdown(text)\n\u001b[1;32m 22\u001b[0m \u001b[39mreturn\u001b[39;00m AgentAction(response[\u001b[39m\"\u001b[39m\u001b[39maction\u001b[39m\u001b[39m\"\u001b[39m], response[\u001b[39m\"\u001b[39m\u001b[39maction_input\u001b[39m\u001b[39m\"\u001b[39m], text)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/output_parsers/json.py:17\u001b[0m, in \u001b[0;36mparse_json_markdown\u001b[0;34m(json_string)\u001b[0m\n\u001b[1;32m 16\u001b[0m \u001b[39m# Parse the JSON string into a Python dictionary\u001b[39;00m\n\u001b[0;32m---> 17\u001b[0m parsed \u001b[39m=\u001b[39m json\u001b[39m.\u001b[39;49mloads(json_string)\n\u001b[1;32m 19\u001b[0m \u001b[39mreturn\u001b[39;00m parsed\n\nFile \u001b[0;32m/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py:346\u001b[0m, in \u001b[0;36mloads\u001b[0;34m(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\u001b[0m\n\u001b[1;32m 343\u001b[0m \u001b[39mif\u001b[39;00m (\u001b[39mcls\u001b[39m \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m object_hook \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m\n\u001b[1;32m 344\u001b[0m parse_int \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m parse_float \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m\n\u001b[1;32m 345\u001b[0m parse_constant \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m object_pairs_hook \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m \u001b[39mand\u001b[39;00m \u001b[39mnot\u001b[39;00m kw):\n\u001b[0;32m--> 346\u001b[0m \u001b[39mreturn\u001b[39;00m _default_decoder\u001b[39m.\u001b[39;49mdecode(s)\n\u001b[1;32m 347\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mcls\u001b[39m \u001b[39mis\u001b[39;00m \u001b[39mNone\u001b[39;00m:\n\nFile \u001b[0;32m/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py:337\u001b[0m, in \u001b[0;36mJSONDecoder.decode\u001b[0;34m(self, s, _w)\u001b[0m\n\u001b[1;32m 333\u001b[0m \u001b[39m\u001b[39m\u001b[39m\"\"\"Return the Python representation of ``s`` (a ``str`` instance\u001b[39;00m\n\u001b[1;32m 334\u001b[0m \u001b[39mcontaining a JSON document).\u001b[39;00m\n\u001b[1;32m 335\u001b[0m \n\u001b[1;32m 336\u001b[0m \u001b[39m\"\"\"\u001b[39;00m\n\u001b[0;32m--> 337\u001b[0m obj, end \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mraw_decode(s, idx\u001b[39m=\u001b[39;49m_w(s, \u001b[39m0\u001b[39;49m)\u001b[39m.\u001b[39;49mend())\n\u001b[1;32m 338\u001b[0m end \u001b[39m=\u001b[39m _w(s, end)\u001b[39m.\u001b[39mend()\n\nFile \u001b[0;32m/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py:355\u001b[0m, in \u001b[0;36mJSONDecoder.raw_decode\u001b[0;34m(self, s, idx)\u001b[0m\n\u001b[1;32m 
354\u001b[0m \u001b[39mexcept\u001b[39;00m \u001b[39mStopIteration\u001b[39;00m \u001b[39mas\u001b[39;00m err:\n\u001b[0;32m--> 355\u001b[0m \u001b[39mraise\u001b[39;00m JSONDecodeError(\u001b[39m\"\u001b[39m\u001b[39mExpecting value\u001b[39m\u001b[39m\"\u001b[39m, s, err\u001b[39m.\u001b[39mvalue) \u001b[39mfrom\u001b[39;00m \u001b[39mNone\u001b[39;00m\n\u001b[1;32m 356\u001b[0m \u001b[39mreturn\u001b[39;00m obj, end\n\n\u001b[0;31mJSONDecodeError\u001b[0m: Expecting value: line 1 column 1 (char 0)\n\nDuring handling of the above exception, another exception occurred:\n\n\u001b[0;31mOutputParserException\u001b[0m Traceback (most recent call last)\nCell \u001b[0;32mIn[13], line 19\u001b[0m\n\u001b[1;32m 16\u001b[0m agent \u001b[39m=\u001b[39m initialize_agent(tools, chat, agent\u001b[39m=\u001b[39mAgentType\u001b[39m.\u001b[39mCHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose\u001b[39m=\u001b[39m\u001b[39mTrue\u001b[39;00m)\n\u001b[1;32m 18\u001b[0m \u001b[39m# Now let's test it out!\u001b[39;00m\n\u001b[0;32m---> 19\u001b[0m agent\u001b[39m.\u001b[39;49mrun(\u001b[39m\"\u001b[39;49m\u001b[39mWhat is EPAM price in NYSE? What is that number raised to the 0.23 power?\u001b[39;49m\u001b[39m\"\u001b[39;49m)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/chains/base.py:236\u001b[0m, in \u001b[0;36mChain.run\u001b[0;34m(self, callbacks, *args, **kwargs)\u001b[0m\n\u001b[1;32m 234\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mlen\u001b[39m(args) \u001b[39m!=\u001b[39m \u001b[39m1\u001b[39m:\n\u001b[1;32m 235\u001b[0m \u001b[39mraise\u001b[39;00m \u001b[39mValueError\u001b[39;00m(\u001b[39m\"\u001b[39m\u001b[39m`run` supports only one positional argument.\u001b[39m\u001b[39m\"\u001b[39m)\n\u001b[0;32m--> 236\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39;49m(args[\u001b[39m0\u001b[39;49m], callbacks\u001b[39m=\u001b[39;49mcallbacks)[\u001b[39mself\u001b[39m\u001b[39m.\u001b[39moutput_keys[\u001b[39m0\u001b[39m]]\n\u001b[1;32m 238\u001b[0m \u001b[39mif\u001b[39;00m kwargs \u001b[39mand\u001b[39;00m \u001b[39mnot\u001b[39;00m args:\n\u001b[1;32m 239\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39m(kwargs, callbacks\u001b[39m=\u001b[39mcallbacks)[\u001b[39mself\u001b[39m\u001b[39m.\u001b[39moutput_keys[\u001b[39m0\u001b[39m]]\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/chains/base.py:140\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs, callbacks)\u001b[0m\n\u001b[1;32m 138\u001b[0m \u001b[39mexcept\u001b[39;00m (\u001b[39mKeyboardInterrupt\u001b[39;00m, \u001b[39mException\u001b[39;00m) \u001b[39mas\u001b[39;00m e:\n\u001b[1;32m 139\u001b[0m run_manager\u001b[39m.\u001b[39mon_chain_error(e)\n\u001b[0;32m--> 140\u001b[0m \u001b[39mraise\u001b[39;00m e\n\u001b[1;32m 141\u001b[0m run_manager\u001b[39m.\u001b[39mon_chain_end(outputs)\n\u001b[1;32m 142\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mprep_outputs(inputs, outputs, return_only_outputs)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/chains/base.py:134\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs, callbacks)\u001b[0m\n\u001b[1;32m 128\u001b[0m run_manager \u001b[39m=\u001b[39m callback_manager\u001b[39m.\u001b[39mon_chain_start(\n\u001b[1;32m 129\u001b[0m {\u001b[39m\"\u001b[39m\u001b[39mname\u001b[39m\u001b[39m\"\u001b[39m: 
\u001b[39mself\u001b[39m\u001b[39m.\u001b[39m\u001b[39m__class__\u001b[39m\u001b[39m.\u001b[39m\u001b[39m__name__\u001b[39m},\n\u001b[1;32m 130\u001b[0m inputs,\n\u001b[1;32m 131\u001b[0m )\n\u001b[1;32m 132\u001b[0m \u001b[39mtry\u001b[39;00m:\n\u001b[1;32m 133\u001b[0m outputs \u001b[39m=\u001b[39m (\n\u001b[0;32m--> 134\u001b[0m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49m_call(inputs, run_manager\u001b[39m=\u001b[39;49mrun_manager)\n\u001b[1;32m 135\u001b[0m \u001b[39mif\u001b[39;00m new_arg_supported\n\u001b[1;32m 136\u001b[0m \u001b[39melse\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_call(inputs)\n\u001b[1;32m 137\u001b[0m )\n\u001b[1;32m 138\u001b[0m \u001b[39mexcept\u001b[39;00m (\u001b[39mKeyboardInterrupt\u001b[39;00m, \u001b[39mException\u001b[39;00m) \u001b[39mas\u001b[39;00m e:\n\u001b[1;32m 139\u001b[0m run_manager\u001b[39m.\u001b[39mon_chain_error(e)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/agent.py:951\u001b[0m, in \u001b[0;36mAgentExecutor._call\u001b[0;34m(self, inputs, run_manager)\u001b[0m\n\u001b[1;32m 949\u001b[0m \u001b[39m# We now enter the agent loop (until it returns something).\u001b[39;00m\n\u001b[1;32m 950\u001b[0m \u001b[39mwhile\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_should_continue(iterations, time_elapsed):\n\u001b[0;32m--> 951\u001b[0m next_step_output \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49m_take_next_step(\n\u001b[1;32m 952\u001b[0m name_to_tool_map,\n\u001b[1;32m 953\u001b[0m color_mapping,\n\u001b[1;32m 954\u001b[0m inputs,\n\u001b[1;32m 955\u001b[0m intermediate_steps,\n\u001b[1;32m 956\u001b[0m run_manager\u001b[39m=\u001b[39;49mrun_manager,\n\u001b[1;32m 957\u001b[0m )\n\u001b[1;32m 958\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39misinstance\u001b[39m(next_step_output, AgentFinish):\n\u001b[1;32m 959\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39m_return(\n\u001b[1;32m 960\u001b[0m next_step_output, intermediate_steps, run_manager\u001b[39m=\u001b[39mrun_manager\n\u001b[1;32m 961\u001b[0m )\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/agent.py:773\u001b[0m, in \u001b[0;36mAgentExecutor._take_next_step\u001b[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001b[0m\n\u001b[1;32m 771\u001b[0m raise_error \u001b[39m=\u001b[39m \u001b[39mFalse\u001b[39;00m\n\u001b[1;32m 772\u001b[0m \u001b[39mif\u001b[39;00m raise_error:\n\u001b[0;32m--> 773\u001b[0m \u001b[39mraise\u001b[39;00m e\n\u001b[1;32m 774\u001b[0m text \u001b[39m=\u001b[39m \u001b[39mstr\u001b[39m(e)\n\u001b[1;32m 775\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39misinstance\u001b[39m(\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mhandle_parsing_errors, \u001b[39mbool\u001b[39m):\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/agent.py:762\u001b[0m, in \u001b[0;36mAgentExecutor._take_next_step\u001b[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001b[0m\n\u001b[1;32m 756\u001b[0m \u001b[39m\u001b[39m\u001b[39m\"\"\"Take a single step in the thought-action-observation loop.\u001b[39;00m\n\u001b[1;32m 757\u001b[0m \n\u001b[1;32m 758\u001b[0m \u001b[39mOverride this to take control of how the agent makes and acts on choices.\u001b[39;00m\n\u001b[1;32m 759\u001b[0m \u001b[39m\"\"\"\u001b[39;00m\n\u001b[1;32m 760\u001b[0m 
\u001b[39mtry\u001b[39;00m:\n\u001b[1;32m 761\u001b[0m \u001b[39m# Call the LLM to see what to do.\u001b[39;00m\n\u001b[0;32m--> 762\u001b[0m output \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49magent\u001b[39m.\u001b[39;49mplan(\n\u001b[1;32m 763\u001b[0m intermediate_steps,\n\u001b[1;32m 764\u001b[0m callbacks\u001b[39m=\u001b[39;49mrun_manager\u001b[39m.\u001b[39;49mget_child() \u001b[39mif\u001b[39;49;00m run_manager \u001b[39melse\u001b[39;49;00m \u001b[39mNone\u001b[39;49;00m,\n\u001b[1;32m 765\u001b[0m \u001b[39m*\u001b[39;49m\u001b[39m*\u001b[39;49minputs,\n\u001b[1;32m 766\u001b[0m )\n\u001b[1;32m 767\u001b[0m \u001b[39mexcept\u001b[39;00m OutputParserException \u001b[39mas\u001b[39;00m e:\n\u001b[1;32m 768\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39misinstance\u001b[39m(\u001b[39mself\u001b[39m\u001b[39m.\u001b[39mhandle_parsing_errors, \u001b[39mbool\u001b[39m):\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/agent.py:444\u001b[0m, in \u001b[0;36mAgent.plan\u001b[0;34m(self, intermediate_steps, callbacks, **kwargs)\u001b[0m\n\u001b[1;32m 442\u001b[0m full_inputs \u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mget_full_inputs(intermediate_steps, \u001b[39m*\u001b[39m\u001b[39m*\u001b[39mkwargs)\n\u001b[1;32m 443\u001b[0m full_output \u001b[39m=\u001b[39m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39mllm_chain\u001b[39m.\u001b[39mpredict(callbacks\u001b[39m=\u001b[39mcallbacks, \u001b[39m*\u001b[39m\u001b[39m*\u001b[39mfull_inputs)\n\u001b[0;32m--> 444\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49moutput_parser\u001b[39m.\u001b[39;49mparse(full_output)\n\nFile \u001b[0;32m~/Projects/epam/python/langchain/env/lib/python3.10/site-packages/langchain/agents/chat/output_parser.py:25\u001b[0m, in \u001b[0;36mChatOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 22\u001b[0m \u001b[39mreturn\u001b[39;00m AgentAction(response[\u001b[39m\"\u001b[39m\u001b[39maction\u001b[39m\u001b[39m\"\u001b[39m], response[\u001b[39m\"\u001b[39m\u001b[39maction_input\u001b[39m\u001b[39m\"\u001b[39m], text)\n\u001b[1;32m 24\u001b[0m \u001b[39mexcept\u001b[39;00m \u001b[39mException\u001b[39;00m:\n\u001b[0;32m---> 25\u001b[0m \u001b[39mraise\u001b[39;00m OutputParserException(\u001b[39mf\u001b[39m\u001b[39m\"\u001b[39m\u001b[39mCould not parse LLM output: \u001b[39m\u001b[39m{\u001b[39;00mtext\u001b[39m}\u001b[39;00m\u001b[39m\"\u001b[39m)\n\n\u001b[0;31mOutputParserException\u001b[0m: Could not parse LLM output: Thought: I need to use a search engine to find the current price of EPAM on NYSE and a calculator to raise it to the 0.23 power.\n\nAction:\n```\n{\n \"action\": \"Search\",\n \"action_input\": \"EPAM NYSE price\"\n}\n```\n\n"
}
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("What is EPAM price in NYSE? What is that number raised to the 0.23 power?")
```
### Expected behavior
Should work | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5299/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5297 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5297/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5297/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5297/events | https://github.com/langchain-ai/langchain/pull/5297 | 1,727,534,737 | PR_kwDOIPDwls5Rb587 | 5,297 | Fixed passing creds to VertexAI LLM | {
"login": "lkuligin",
"id": 11026406,
"node_id": "MDQ6VXNlcjExMDI2NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/11026406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkuligin",
"html_url": "https://github.com/lkuligin",
"followers_url": "https://api.github.com/users/lkuligin/followers",
"following_url": "https://api.github.com/users/lkuligin/following{/other_user}",
"gists_url": "https://api.github.com/users/lkuligin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkuligin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkuligin/subscriptions",
"organizations_url": "https://api.github.com/users/lkuligin/orgs",
"repos_url": "https://api.github.com/users/lkuligin/repos",
"events_url": "https://api.github.com/users/lkuligin/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkuligin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T12:28:45 | 2023-05-26T15:32:55 | 2023-05-26T15:31:03 | CONTRIBUTOR | null | # Fixed passing creds to VertexAI LLM
Fixes #5279
It looks like we should drop a type annotation for Credentials.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5297/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5297",
"html_url": "https://github.com/langchain-ai/langchain/pull/5297",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5297.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5297.patch",
"merged_at": "2023-05-26T15:31:02"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5296 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5296/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5296/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5296/events | https://github.com/langchain-ai/langchain/issues/5296 | 1,727,514,993 | I_kwDOIPDwls5m98lx | 5,296 | APIConnectionError: Error communicating with OpenAI. | {
"login": "AvikantSrivastava",
"id": 41926176,
"node_id": "MDQ6VXNlcjQxOTI2MTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/41926176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AvikantSrivastava",
"html_url": "https://github.com/AvikantSrivastava",
"followers_url": "https://api.github.com/users/AvikantSrivastava/followers",
"following_url": "https://api.github.com/users/AvikantSrivastava/following{/other_user}",
"gists_url": "https://api.github.com/users/AvikantSrivastava/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AvikantSrivastava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AvikantSrivastava/subscriptions",
"organizations_url": "https://api.github.com/users/AvikantSrivastava/orgs",
"repos_url": "https://api.github.com/users/AvikantSrivastava/repos",
"events_url": "https://api.github.com/users/AvikantSrivastava/events{/privacy}",
"received_events_url": "https://api.github.com/users/AvikantSrivastava/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 11 | 2023-05-26T12:14:47 | 2023-11-08T10:22:13 | null | NONE | null | ### System Info
`python 3.11`
```
fastapi==0.95.1
langchain==0.0.180
pydantic==1.10.7
uvicorn==0.21.1
openai==0.27.4
```
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
I am trying to create a streaming endpoint in FastAPI; below are the files.
`main.py`
```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse  # needed for response_class below
from src.chat_stream import ChatOpenAIStreamingResponse, send_message, StreamRequest
app = FastAPI()
@app.post("/chat_streaming", response_class=StreamingResponse)
async def chat(body: StreamRequest ):
return ChatOpenAIStreamingResponse(send_message(body.message), media_type="text/event-stream")
```
`src/chat_stream.py`
```python
from typing import Awaitable, Callable, Union
Sender = Callable[[Union[str, bytes]], Awaitable[None]]
from starlette.types import Send
from typing import Any, Optional, Awaitable, Callable, Iterator, Union
from langchain.schema import HumanMessage
from pydantic import BaseModel
from fastapi.responses import StreamingResponse
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager
class EmptyIterator(Iterator[Union[str, bytes]]):
def __iter__(self):
return self
def __next__(self):
raise StopIteration
class AsyncStreamCallbackHandler(AsyncCallbackHandler):
"""Callback handler for streaming, inheritance from AsyncCallbackHandler."""
def __init__(self, send: Sender):
super().__init__()
self.send = send
async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Rewrite on_llm_new_token to send token to client."""
await self.send(f"data: {token}\n\n")
class ChatOpenAIStreamingResponse(StreamingResponse):
"""Streaming response for openai chat model, inheritance from StreamingResponse."""
def __init__(
self,
generate: Callable[[Sender], Awaitable[None]],
status_code: int = 200,
media_type: Optional[str] = None,
) -> None:
super().__init__(
content=EmptyIterator(), status_code=status_code, media_type=media_type
)
self.generate = generate
async def stream_response(self, send: Send) -> None:
"""Rewrite stream_response to send response to client."""
await send(
{
"type": "http.response.start",
"status": self.status_code,
"headers": self.raw_headers,
}
)
async def send_chunk(chunk: Union[str, bytes]):
if not isinstance(chunk, bytes):
chunk = chunk.encode(self.charset)
await send({"type": "http.response.body", "body": chunk, "more_body": True})
# send body to client
await self.generate(send_chunk)
# send empty body to client to close connection
await send({"type": "http.response.body", "body": b"", "more_body": False})
def send_message(message: str) -> Callable[[Sender], Awaitable[None]]:
async def generate(send: Sender):
model = ChatOpenAI(
streaming=True,
verbose=True,
callback_manager=AsyncCallbackManager([AsyncStreamCallbackHandler(send)]),
)
await model.agenerate(messages=[[HumanMessage(content=message)]])
return generate
class StreamRequest(BaseModel):
"""Request body for streaming."""
message: str
```
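The stack traces under "Expected behavior" end in an `SSLCertVerificationError`, so the failure happens during TLS setup rather than inside OpenAI or LangChain. A minimal diagnostic sketch (assuming the `certifi` package is installed; the env-var line is a possible workaround, not a confirmed fix):
```python
import os
import ssl

import certifi

# Where this interpreter looks for CA certificates by default.
print(ssl.get_default_verify_paths())
# The CA bundle shipped with certifi.
print(certifi.where())

# Possible workaround (assumption): point OpenSSL at certifi's bundle
# before the aiohttp/OpenAI clients are created.
os.environ.setdefault("SSL_CERT_FILE", certifi.where())
```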
### Expected behavior
The endpoint should stream the response from the LLM chain; instead I am getting this error:
```
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 2.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 16.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
```
```python
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1098, in create_connection
transport, protocol = await self._create_connection_transport(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1131, in _create_connection_transport
await waiter
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 577, in _on_handshake_complete
raise handshake_exc
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 559, in _do_handshake
self._sslobj.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 979, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 588, in arequest_raw
result = await session.request(**request_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
conn = await self._connector.connect(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
proto = await self._create_connection(req, traces, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
raise last_exc
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
raise ClientConnectorCertificateError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host api.openai.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Project/venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 429, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/fastapi/applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Project/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Project/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Project/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
File "/Project/venv/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__
async with anyio.create_task_group() as task_group:
File "/Project/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "/Project/venv/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/Project/src/app.py", line 67, in stream_response
await self.generate(send_chunk)
File "/Project/src/app.py", line 80, in generate
await model.agenerate(messages=[[HumanMessage(content=message)]])
File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 63, in agenerate
results = await asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 297, in _agenerate
async for stream_resp in await acompletion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 63, in acompletion_with_retry
return await _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 325, in iter
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 158, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Project/venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 61, in _completion_with_retry
return await llm.client.acreate(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 300, in arequest
result = await self.arequest_raw(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Project/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 605, in arequest_raw
raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
``` | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5296/reactions",
"total_count": 13,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 9
} | https://api.github.com/repos/langchain-ai/langchain/issues/5296/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5295 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5295/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5295/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5295/events | https://github.com/langchain-ai/langchain/issues/5295 | 1,727,465,007 | I_kwDOIPDwls5m9wYv | 5,295 | Get the source document info with result | {
"login": "Sadaf-Syeda",
"id": 59745163,
"node_id": "MDQ6VXNlcjU5NzQ1MTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/59745163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sadaf-Syeda",
"html_url": "https://github.com/Sadaf-Syeda",
"followers_url": "https://api.github.com/users/Sadaf-Syeda/followers",
"following_url": "https://api.github.com/users/Sadaf-Syeda/following{/other_user}",
"gists_url": "https://api.github.com/users/Sadaf-Syeda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sadaf-Syeda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sadaf-Syeda/subscriptions",
"organizations_url": "https://api.github.com/users/Sadaf-Syeda/orgs",
"repos_url": "https://api.github.com/users/Sadaf-Syeda/repos",
"events_url": "https://api.github.com/users/Sadaf-Syeda/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sadaf-Syeda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-05-26T11:41:44 | 2023-10-23T16:08:27 | 2023-10-23T16:08:26 | NONE | null | ### System Info
ValueError: `run` not supported when there is not exactly one output key. Got ['result', 'source_documents'
]
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
qa = RetrievalQA.from_chain_type(
llm=OpenAI(),
chain_type = "stuff",
retriever=db.as_retriever(),
return_source_documents=True
)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)#, output_keys=['result','source_documents'])
agent_chain = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=memory)# return_intermediate_steps=True)#, output_keys=['result','source_documents']
#)
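The ValueError in the System Info above is raised by `Chain.run`, which only supports chains with exactly one output key. A minimal sketch (assuming the `qa` chain defined above; the question string is a placeholder) that calls the chain directly and returns both keys:
```python
# Call the chain as a callable instead of qa.run(...), because run()
# requires exactly one output key.
result = qa({"query": "What does the document say about X?"})
print(result["result"])            # the answer text
print(result["source_documents"])  # the retrieved source Documents
```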
### Expected behavior
Returns the answer and the source documents as well | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5295/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/langchain-ai/langchain/issues/5295/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5294 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5294/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5294/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5294/events | https://github.com/langchain-ai/langchain/issues/5294 | 1,727,460,382 | I_kwDOIPDwls5m9vQe | 5,294 | Issue: security concerns with `exec()` via multiple agents and Shell tool | {
"login": "juppytt",
"id": 46583939,
"node_id": "MDQ6VXNlcjQ2NTgzOTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/46583939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juppytt",
"html_url": "https://github.com/juppytt",
"followers_url": "https://api.github.com/users/juppytt/followers",
"following_url": "https://api.github.com/users/juppytt/following{/other_user}",
"gists_url": "https://api.github.com/users/juppytt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juppytt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juppytt/subscriptions",
"organizations_url": "https://api.github.com/users/juppytt/orgs",
"repos_url": "https://api.github.com/users/juppytt/repos",
"events_url": "https://api.github.com/users/juppytt/events{/privacy}",
"received_events_url": "https://api.github.com/users/juppytt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-26T11:38:23 | 2023-09-26T16:06:39 | 2023-09-26T16:06:38 | CONTRIBUTOR | null | ### Issue you'd like to raise.
TL;DR: The use of exec() in agents can lead to remote code execution vulnerabilities. Some Huggingface projects use such agents, despite the potential harm of LLM-generated Python code.
#1026 and #814 discuss the security concerns regarding the use of `exec()` in llm_math chain. The comments in #1026 proposed methods to sandbox the code execution, but due to environmental issues, the code was patched to replace `exec()` with `numexpr.evaluate()` (#2943). This restricted the execution capabilities to mathematical functionalities only. This bug was assigned the CVE number CVE-2023-29374.
As shown in the above issues, the usage of `exec()` in a chain can pose a significant security risk, especially when the chain is running on a remote machine. This seems common scenario for projects in Huggingface.
However, in the latest langchain, `exec()` is still used in `PythonReplTool` and `PythonAstReplTool`.
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/langchain/tools/python/tool.py#L55
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/langchain/tools/python/tool.py#L102
These functions are called by Pandas Dataframe Agent, Spark Dataframe Agent, CSV Agent. It seems they are intentionally designed to pass the LLM output to `PythonAstTool` or `PythonAstReplTool` to execute the LLM-generated code in the machine.
The documentation for these agents explicitly states that they should be used with caution since LLM-generated Python code can be potentially harmful. For instance:
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/docs/modules/agents/toolkits/examples/pandas.ipynb#L12
Despite this, I have observed several projects in Huggingface using `create_pandas_dataframe_agent` and `create_csv_agent`.
### Suggestion:
Fixing this issue as done in llm_math chain seems challenging.
Simply restricting the LLM-generated code to Pandas and Spark execution might not be sufficient because there are still numerous malicious tasks that can be performed using those APIs. For instance, Pandas can read and write files.
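To make that concrete, a minimal sketch (illustrative only; the paths are placeholders) showing that the pandas API alone already provides file-system read and write access, so restricting execution to pandas calls is not a sandbox:
```python
import pandas as pd

# Reading an arbitrary local file through the pandas API alone.
leaked = pd.read_csv("/etc/passwd", sep=":", header=None)
print(leaked.head())

# Writing to an arbitrary location through the pandas API alone.
pd.DataFrame({"payload": ["attacker-controlled content"]}).to_csv("/tmp/pwned.csv", index=False)
```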
Meanwhile, it seems crucial to emphasize the security concerns related to LLM-generated code for the overall security of LLM apps. Merely limiting execution to specific frameworks or APIs may not fully address the underlying security risks.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5294/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5294/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5293 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5293/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5293/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5293/events | https://github.com/langchain-ai/langchain/pull/5293 | 1,727,452,361 | PR_kwDOIPDwls5Rbnz_ | 5,293 | Implemented appending arbitrary messages | {
"login": "eavanvalkenburg",
"id": 13749212,
"node_id": "MDQ6VXNlcjEzNzQ5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13749212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eavanvalkenburg",
"html_url": "https://github.com/eavanvalkenburg",
"followers_url": "https://api.github.com/users/eavanvalkenburg/followers",
"following_url": "https://api.github.com/users/eavanvalkenburg/following{/other_user}",
"gists_url": "https://api.github.com/users/eavanvalkenburg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eavanvalkenburg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eavanvalkenburg/subscriptions",
"organizations_url": "https://api.github.com/users/eavanvalkenburg/orgs",
"repos_url": "https://api.github.com/users/eavanvalkenburg/repos",
"events_url": "https://api.github.com/users/eavanvalkenburg/events{/privacy}",
"received_events_url": "https://api.github.com/users/eavanvalkenburg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 3 | 2023-05-26T11:33:01 | 2023-05-29T17:37:37 | 2023-05-29T14:19:00 | CONTRIBUTOR | null | # Implemented appending arbitrary messages to the base chat message history, the in-memory and cosmos ones.
<!--
Thank you for contributing to LangChain! Your PR will appear in our next release under the title you set. Please make sure it highlights your valuable contribution.
Replace this with a description of the change, the issue it fixes (if applicable), and relevant context. List any dependencies required for this change.
After you're done, someone will review your PR. They may suggest improvements. If no one reviews your PR within a few days, feel free to @-mention the same people again, as notifications can get lost.
-->
As discussed, this is the alternative approach to #4480: an add_message method that takes a BaseMessage as input, so that the user can control what goes into the message (e.g., additional kwargs).
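As an illustrative usage sketch (the `additional_kwargs` content is an assumption, not taken from this PR's diff):
```python
from langchain.memory import ChatMessageHistory
from langchain.schema import AIMessage

history = ChatMessageHistory()
# Any BaseMessage subclass can be appended, including extra metadata in additional_kwargs.
history.add_message(AIMessage(content="Hello!", additional_kwargs={"source": "my-tool"}))
print(history.messages)
```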
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
<!-- If you're adding a new integration, include an integration test and an example notebook showing its use! -->
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@hwchase17 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5293/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5293",
"html_url": "https://github.com/langchain-ai/langchain/pull/5293",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5293.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5293.patch",
"merged_at": "2023-05-29T14:19:00"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5292 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5292/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5292/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5292/events | https://github.com/langchain-ai/langchain/pull/5292 | 1,727,439,533 | PR_kwDOIPDwls5Rbk6A | 5,292 | added cosmos kwargs option | {
"login": "eavanvalkenburg",
"id": 13749212,
"node_id": "MDQ6VXNlcjEzNzQ5MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/13749212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eavanvalkenburg",
"html_url": "https://github.com/eavanvalkenburg",
"followers_url": "https://api.github.com/users/eavanvalkenburg/followers",
"following_url": "https://api.github.com/users/eavanvalkenburg/following{/other_user}",
"gists_url": "https://api.github.com/users/eavanvalkenburg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eavanvalkenburg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eavanvalkenburg/subscriptions",
"organizations_url": "https://api.github.com/users/eavanvalkenburg/orgs",
"repos_url": "https://api.github.com/users/eavanvalkenburg/repos",
"events_url": "https://api.github.com/users/eavanvalkenburg/events{/privacy}",
"received_events_url": "https://api.github.com/users/eavanvalkenburg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T11:25:55 | 2023-06-30T06:38:35 | 2023-05-28T04:19:41 | CONTRIBUTOR | null | # Added the ability to pass kwargs to cosmos client constructor
The Cosmos client has a ton of options that can be set, so this PR allows those to be passed through from the chat memory constructor to the client constructor. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5292/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5292",
"html_url": "https://github.com/langchain-ai/langchain/pull/5292",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5292.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5292.patch",
"merged_at": "2023-05-28T04:19:41"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5291 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5291/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5291/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5291/events | https://github.com/langchain-ai/langchain/pull/5291 | 1,727,389,177 | PR_kwDOIPDwls5RbZsY | 5,291 | Add an example to make the prompt more robust | {
"login": "pengqu123",
"id": 82029664,
"node_id": "MDQ6VXNlcjgyMDI5NjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/82029664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pengqu123",
"html_url": "https://github.com/pengqu123",
"followers_url": "https://api.github.com/users/pengqu123/followers",
"following_url": "https://api.github.com/users/pengqu123/following{/other_user}",
"gists_url": "https://api.github.com/users/pengqu123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pengqu123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pengqu123/subscriptions",
"organizations_url": "https://api.github.com/users/pengqu123/orgs",
"repos_url": "https://api.github.com/users/pengqu123/repos",
"events_url": "https://api.github.com/users/pengqu123/events{/privacy}",
"received_events_url": "https://api.github.com/users/pengqu123/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-26T10:56:39 | 2023-05-26T13:32:36 | 2023-05-26T13:32:36 | CONTRIBUTOR | null | I am trying the open-source model for llm_math, and I may have found a typical problem: ChatGPT knows ** and always changes a^b into a**b (ChatGPT doesn't treat ^ as XOR), while open-source models prefer a^b. So if my question is 29^(1/5) or the fifth root of 29, I get the error "_TypeError: unsupported operand type(s) for ^: 'int' and 'float'_" from _numexpr.evaluate("29^(1/5)")_. After I added this example, the model is able to solve questions like "a^b", "the square of a", and "the fifth root of a" by using **. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5291/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5291",
"html_url": "https://github.com/langchain-ai/langchain/pull/5291",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5291.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5291.patch",
"merged_at": "2023-05-26T13:32:36"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5290 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5290/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5290/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5290/events | https://github.com/langchain-ai/langchain/pull/5290 | 1,727,277,259 | PR_kwDOIPDwls5RbBCJ | 5,290 | Better docs for weaviate hybrid search | {
"login": "hsm207",
"id": 2398765,
"node_id": "MDQ6VXNlcjIzOTg3NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2398765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsm207",
"html_url": "https://github.com/hsm207",
"followers_url": "https://api.github.com/users/hsm207/followers",
"following_url": "https://api.github.com/users/hsm207/following{/other_user}",
"gists_url": "https://api.github.com/users/hsm207/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hsm207/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsm207/subscriptions",
"organizations_url": "https://api.github.com/users/hsm207/orgs",
"repos_url": "https://api.github.com/users/hsm207/repos",
"events_url": "https://api.github.com/users/hsm207/events{/privacy}",
"received_events_url": "https://api.github.com/users/hsm207/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-26T09:44:43 | 2023-05-26T16:52:25 | 2023-05-26T16:30:42 | CONTRIBUTOR | null | # Better docs for weaviate hybrid search
<!--
Thank you for contributing to LangChain! Your PR will appear in our next release under the title you set. Please make sure it highlights your valuable contribution.
Replace this with a description of the change, the issue it fixes (if applicable), and relevant context. List any dependencies required for this change.
After you're done, someone will review your PR. They may suggest improvements. If no one reviews your PR within a few days, feel free to @-mention the same people again, as notifications can get lost.
-->
<!-- Remove if not applicable -->
Fixes: NA
## Before submitting
<!-- If you're adding a new integration, include an integration test and an example notebook showing its use! -->
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
<!-- For a quicker response, figure out the right person to tag with @
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoaders
- @eyurtsev
Models
- @hwchase17
- @agola11
Agents / Tools / Toolkits
- @vowelparrot
VectorStores / Retrievers / Memory
- @dev2049
-->
@dev2049 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5290/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5290",
"html_url": "https://github.com/langchain-ai/langchain/pull/5290",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5290.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5290.patch",
"merged_at": "2023-05-26T16:30:42"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5289 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5289/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5289/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5289/events | https://github.com/langchain-ai/langchain/issues/5289 | 1,727,217,498 | I_kwDOIPDwls5m8z9a | 5,289 | support history in LLMChain and LLM | {
"login": "xiaoyaolangzhi",
"id": 15037766,
"node_id": "MDQ6VXNlcjE1MDM3NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaoyaolangzhi",
"html_url": "https://github.com/xiaoyaolangzhi",
"followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers",
"following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaoyaolangzhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaoyaolangzhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaoyaolangzhi/subscriptions",
"organizations_url": "https://api.github.com/users/xiaoyaolangzhi/orgs",
"repos_url": "https://api.github.com/users/xiaoyaolangzhi/repos",
"events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaoyaolangzhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T09:11:16 | 2023-09-10T16:11:37 | 2023-09-10T16:11:36 | NONE | null | ### Feature request
It would be very useful to be able to pass a history parameter, for example:
```
history = [("who are you", "i'm an ai")]

llm = OpenAI()
llm("hello", history=history)  # proposed API: pass prior turns directly to the LLM

llm_chain = LLMChain(llm=llm, prompt=prompt)  # prompt is assumed to be defined
llm_chain({"query": "hello", "history": history})  # proposed API: pass history with the inputs
```
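For comparison, a minimal sketch (assuming an OpenAI API key is configured) of how a history string can already be threaded through an LLMChain via a prompt variable today:
```python
from langchain import LLMChain, OpenAI, PromptTemplate

prompt = PromptTemplate(
    input_variables=["history", "query"],
    template="{history}\nHuman: {query}\nAI:",
)
chain = LLMChain(llm=OpenAI(), prompt=prompt)

history = "Human: who are you\nAI: i'm an ai"
print(chain({"query": "hello", "history": history}))
```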
### Motivation
*
### Your contribution
* | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5289/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5286 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5286/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5286/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5286/events | https://github.com/langchain-ai/langchain/issues/5286 | 1,727,005,383 | I_kwDOIPDwls5m8ALH | 5,286 | SQL chain generates extra add on question if I use ChatOpenAI inplace of OpenAI | {
"login": "charanhu",
"id": 52596822,
"node_id": "MDQ6VXNlcjUyNTk2ODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/52596822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charanhu",
"html_url": "https://github.com/charanhu",
"followers_url": "https://api.github.com/users/charanhu/followers",
"following_url": "https://api.github.com/users/charanhu/following{/other_user}",
"gists_url": "https://api.github.com/users/charanhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charanhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charanhu/subscriptions",
"organizations_url": "https://api.github.com/users/charanhu/orgs",
"repos_url": "https://api.github.com/users/charanhu/repos",
"events_url": "https://api.github.com/users/charanhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/charanhu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-05-26T06:48:08 | 2023-10-16T22:55:29 | 2023-09-26T16:06:43 | NONE | null | ### System Info
```python
import os
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
db = SQLDatabase.from_uri("sqlite:///data/data.db")
llm = ChatOpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
res = db_chain.run("total sales by each region?")
print(res)
```
![image](https://github.com/hwchase17/langchain/assets/52596822/fbfa65bc-398b-43f1-a506-0e7a5cd57221)
But if I use text-davinci, it generates a single result.
```python
import os
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
db = SQLDatabase.from_uri("sqlite:///data/data.db")
llm = OpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
res = db_chain.run("total sales by each region?")
print(res)
```
![text-davinici](https://github.com/hwchase17/langchain/assets/52596822/1ccd746c-5951-42c1-85f6-1530827c1029)
How can I overcome this issue with **ChatOpenAI**?
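One hedged workaround is plain string post-processing of the chain output; the `\nQuestion:` marker is an assumption about where the extra text begins:
```python
res = db_chain.run("total sales by each region?")
# Keep only the text before any follow-up question the chat model appends.
answer = res.split("\nQuestion:")[0].strip()
print(answer)
```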
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import os
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
db = SQLDatabase.from_uri("sqlite:///data/data.db")
llm = ChatOpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
res = db_chain.run("total sales by each region?")
print(res)
### Expected behavior
I need a single answer for the user's input query only, but after the answer the model adds another question by itself. The extra add-on question and query are not needed with **ChatOpenAI**. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5286/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5284 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5284/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5284/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5284/events | https://github.com/langchain-ai/langchain/issues/5284 | 1,726,931,734 | I_kwDOIPDwls5m7uMW | 5,284 | change tuple sql result to dict sql result | {
"login": "BriskyGates",
"id": 34669414,
"node_id": "MDQ6VXNlcjM0NjY5NDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/34669414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BriskyGates",
"html_url": "https://github.com/BriskyGates",
"followers_url": "https://api.github.com/users/BriskyGates/followers",
"following_url": "https://api.github.com/users/BriskyGates/following{/other_user}",
"gists_url": "https://api.github.com/users/BriskyGates/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BriskyGates/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BriskyGates/subscriptions",
"organizations_url": "https://api.github.com/users/BriskyGates/orgs",
"repos_url": "https://api.github.com/users/BriskyGates/repos",
"events_url": "https://api.github.com/users/BriskyGates/events{/privacy}",
"received_events_url": "https://api.github.com/users/BriskyGates/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-26T05:27:36 | 2023-09-18T16:10:51 | 2023-09-18T16:10:50 | NONE | null | ### Feature request
Hey guys, thanks for your amazing work. If I want to get a dictionary SQL result instead of the default tuple from SQLDatabaseChain, what settings do I need to change?
### Motivation
Without database table header fields, the articles generated by LLM may contain errors.
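A minimal sketch of one way to get dictionary rows (assumptions: SQLAlchemy 1.4+ and that `SQLDatabase` keeps its engine on `self._engine`; this is not an official setting), so the chain sees column names in the SQLResult string:
```python
from sqlalchemy import text

from langchain import SQLDatabase


class DictResultSQLDatabase(SQLDatabase):
    """Return rows as a list of {column: value} dicts instead of tuples."""

    def run(self, command: str, fetch: str = "all") -> str:
        with self._engine.begin() as connection:
            result = connection.execute(text(command))
            if fetch == "one":
                row = result.mappings().first()
                return str(dict(row) if row is not None else {})
            return str([dict(row) for row in result.mappings().all()])
```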
### Your contribution
I am currently diving into the code to see how to deal with it. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5284/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/langchain-ai/langchain/issues/5284/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5283 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5283/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5283/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5283/events | https://github.com/langchain-ai/langchain/issues/5283 | 1,726,925,790 | I_kwDOIPDwls5m7sve | 5,283 | Stop logic should be optimezed to be compatible with "Conversation 1:" | {
"login": "xulicong",
"id": 9657266,
"node_id": "MDQ6VXNlcjk2NTcyNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9657266?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xulicong",
"html_url": "https://github.com/xulicong",
"followers_url": "https://api.github.com/users/xulicong/followers",
"following_url": "https://api.github.com/users/xulicong/following{/other_user}",
"gists_url": "https://api.github.com/users/xulicong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xulicong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xulicong/subscriptions",
"organizations_url": "https://api.github.com/users/xulicong/orgs",
"repos_url": "https://api.github.com/users/xulicong/repos",
"events_url": "https://api.github.com/users/xulicong/events{/privacy}",
"received_events_url": "https://api.github.com/users/xulicong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T05:21:00 | 2023-09-10T16:11:42 | 2023-09-10T16:11:41 | NONE | null | ### System Info
windows
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The llm will generate the content below in some cases:
Action 1: xxx
Action Input 1: xxx
Observation 1: xxx
regex = (
r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
)
The regex above in langchain.mrkl.output_parser can match the Action and Action Input in the following scenario:
Action 1: xxx
Action Input 1: xxx
but the stop list is still ['\nObservation:', '\n\tObservation:'], which cannot stop the LLM's generation, because the LLM will go on to generate 'Observation 1: ...'.
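A small demonstration of the mismatch (the regex is copied from above; the sample LLM output is made up):
```python
import re

regex = r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
sample = "Action 1: Search\nAction Input 1: langchain stop tokens\nObservation 1: some result"

# The parser's regex happily matches the numbered "Action 1 / Action Input 1" form...
print(re.search(regex, sample, re.DOTALL).groups())

# ...but neither stop string appears in the numbered output, so generation is never
# cut off before the model hallucinates "Observation 1: ...".
for stop in ["\nObservation:", "\n\tObservation:"]:
    print(stop in sample)  # False both times
```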
### Expected behavior
Optimize stop logic to solve this problem | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5283/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5282 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5282/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5282/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5282/events | https://github.com/langchain-ai/langchain/pull/5282 | 1,726,921,671 | PR_kwDOIPDwls5RZ0zg | 5,282 | docs: `ecosystem/integrations` update 2 | {
"login": "leo-gan",
"id": 2256422,
"node_id": "MDQ6VXNlcjIyNTY0MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2256422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo-gan",
"html_url": "https://github.com/leo-gan",
"followers_url": "https://api.github.com/users/leo-gan/followers",
"following_url": "https://api.github.com/users/leo-gan/following{/other_user}",
"gists_url": "https://api.github.com/users/leo-gan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo-gan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo-gan/subscriptions",
"organizations_url": "https://api.github.com/users/leo-gan/orgs",
"repos_url": "https://api.github.com/users/leo-gan/repos",
"events_url": "https://api.github.com/users/leo-gan/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo-gan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T05:16:23 | 2023-05-29T15:24:39 | 2023-05-29T14:19:44 | COLLABORATOR | null | # docs: ecosystem/integrations update 2
#5219 - part 1
The second part of this update (parts are independent of each other! no overlap):
- added diffbot.md
- updated confluence.ipynb; added confluence.md
- updated college_confidential.md
- updated openai.md
- added blackboard.md
- added bilibili.md
- added azure_blob_storage.md
- added azlyrics.md
- added aws_s3.md
## Who can review?
@hwchase17
@agola11
@vowelparrot
@dev2049
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5282/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5282",
"html_url": "https://github.com/langchain-ai/langchain/pull/5282",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5282.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5282.patch",
"merged_at": "2023-05-29T14:19:44"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5281 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5281/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5281/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5281/events | https://github.com/langchain-ai/langchain/pull/5281 | 1,726,905,598 | PR_kwDOIPDwls5RZxT1 | 5,281 | Docs: Concise intro | {
"login": "dev2049",
"id": 130488702,
"node_id": "U_kgDOB8cZfg",
"avatar_url": "https://avatars.githubusercontent.com/u/130488702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dev2049",
"html_url": "https://github.com/dev2049",
"followers_url": "https://api.github.com/users/dev2049/followers",
"following_url": "https://api.github.com/users/dev2049/following{/other_user}",
"gists_url": "https://api.github.com/users/dev2049/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dev2049/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dev2049/subscriptions",
"organizations_url": "https://api.github.com/users/dev2049/orgs",
"repos_url": "https://api.github.com/users/dev2049/repos",
"events_url": "https://api.github.com/users/dev2049/events{/privacy}",
"received_events_url": "https://api.github.com/users/dev2049/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4678528810,
"node_id": "LA_kwDOIPDwls8AAAABFtyvKg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 1 | 2023-05-26T04:55:31 | 2023-06-22T08:20:05 | 2023-06-22T08:20:05 | CONTRIBUTOR | null | Make doc intro concise
https://python.langchain.com/en/dev2049-concise_get_started/ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5281/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5281",
"html_url": "https://github.com/langchain-ai/langchain/pull/5281",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5281.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5281.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5280 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5280/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5280/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5280/events | https://github.com/langchain-ai/langchain/issues/5280 | 1,726,903,889 | I_kwDOIPDwls5m7nZR | 5,280 | Sitemap - add filtering by modified date | {
"login": "rhabal",
"id": 1269282,
"node_id": "MDQ6VXNlcjEyNjkyODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1269282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rhabal",
"html_url": "https://github.com/rhabal",
"followers_url": "https://api.github.com/users/rhabal/followers",
"following_url": "https://api.github.com/users/rhabal/following{/other_user}",
"gists_url": "https://api.github.com/users/rhabal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rhabal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rhabal/subscriptions",
"organizations_url": "https://api.github.com/users/rhabal/orgs",
"repos_url": "https://api.github.com/users/rhabal/repos",
"events_url": "https://api.github.com/users/rhabal/events{/privacy}",
"received_events_url": "https://api.github.com/users/rhabal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T04:52:49 | 2023-09-10T16:11:47 | 2023-09-10T16:11:46 | NONE | null | ### Feature request
loader = SitemapLoader(
"https://langchain.readthedocs.io/sitemap.xml",
filter_modified_dates=["2023-", "2022-12-"]
)
documents = loader.load()
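Until something like the proposed `filter_modified_dates` argument exists, a hedged workaround sketch is to filter after loading; it assumes the loader copies each sitemap entry's `lastmod` value into the document metadata, which may differ by version:
```python
from langchain.document_loaders import SitemapLoader

loader = SitemapLoader("https://langchain.readthedocs.io/sitemap.xml")
documents = loader.load()
# Keep only documents whose sitemap <lastmod> starts with one of the prefixes.
recent_docs = [
    doc
    for doc in documents
    if str(doc.metadata.get("lastmod", "")).startswith(("2023-", "2022-12-"))
]
```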
### Motivation
Provide enhanced filtering on larger sites
### Your contribution
Provide enhanced filtering on larger sites | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5280/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5279 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5279/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5279/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5279/events | https://github.com/langchain-ai/langchain/issues/5279 | 1,726,889,243 | I_kwDOIPDwls5m7j0b | 5,279 | Issue Passing in Credential to VertexAI model | {
"login": "jmho",
"id": 59701887,
"node_id": "MDQ6VXNlcjU5NzAxODg3",
"avatar_url": "https://avatars.githubusercontent.com/u/59701887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmho",
"html_url": "https://github.com/jmho",
"followers_url": "https://api.github.com/users/jmho/followers",
"following_url": "https://api.github.com/users/jmho/following{/other_user}",
"gists_url": "https://api.github.com/users/jmho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmho/subscriptions",
"organizations_url": "https://api.github.com/users/jmho/orgs",
"repos_url": "https://api.github.com/users/jmho/repos",
"events_url": "https://api.github.com/users/jmho/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-26T04:34:54 | 2023-05-26T15:31:04 | 2023-05-26T15:31:04 | NONE | null | ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
Have Google Cloud CLI and ran and logged in using `gcloud auth login`
Running locally and online in Google Colab
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/19QGMptiCn49fu4i5ZQ0ygfR74ktQFQlb?usp=sharing
The unexpected behavior, `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().`, seems to appear only if you pass any credential, valid or invalid, to the VertexAI wrapper from langchain.
### The error
This code should not throw `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().`. It should either throw no errors, if the credentials, project_id, and location are correct, or, if there is an issue with one of the params, throw a specific error from the `vertexai.init` call below; but it doesn't seem to reach that call if a credential is passed in.
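A hedged workaround sketch based on the pydantic hint in the error message (whether `update_forward_refs()` actually resolves the issue is an assumption, and the key-file path and project values are made up); the `vertexai.init` call from the report follows after this sketch:
```python
from google.oauth2 import service_account
from langchain.llms import VertexAI

VertexAI.update_forward_refs()  # ask pydantic to resolve the "credentials" ForwardRef
credentials = service_account.Credentials.from_service_account_file("sa-key.json")
llm = VertexAI(project="my-gcp-project", location="us-central1", credentials=credentials)
print(llm("Say hello"))
```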
```
vertexai.init(project=project_id,location=location,credentials=credentials,)
``` | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5279/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5278 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5278/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5278/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5278/events | https://github.com/langchain-ai/langchain/issues/5278 | 1,726,867,648 | I_kwDOIPDwls5m7ejA | 5,278 | VertexAI ChatModel implementation misses few-shot "examples" | {
"login": "bent-verbiage",
"id": 941131,
"node_id": "MDQ6VXNlcjk0MTEzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/941131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bent-verbiage",
"html_url": "https://github.com/bent-verbiage",
"followers_url": "https://api.github.com/users/bent-verbiage/followers",
"following_url": "https://api.github.com/users/bent-verbiage/following{/other_user}",
"gists_url": "https://api.github.com/users/bent-verbiage/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bent-verbiage/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bent-verbiage/subscriptions",
"organizations_url": "https://api.github.com/users/bent-verbiage/orgs",
"repos_url": "https://api.github.com/users/bent-verbiage/repos",
"events_url": "https://api.github.com/users/bent-verbiage/events{/privacy}",
"received_events_url": "https://api.github.com/users/bent-verbiage/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T04:02:46 | 2023-09-15T22:13:02 | 2023-08-31T17:11:08 | NONE | null | ### System Info
langchain==0.0.180
python==3.10
google-cloud-aiplatform==1.25.0
### Who can help?
@hwc
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It is debatable whether this is a bug or a missing feature, but I'd argue that the Vertex implementation is missing an important element, even though I'm excited to have the support now.
Using the [VertexAI documentation for chat](https://cloud.google.com/vertex-ai/docs/generative-ai/chat/test-chat-prompts), you can initialise the chat model like the below (emphasis mine).
The list of "examples" functions as a separate instruction (few-shot), not as part of the chat history. This is different from how OpenAI does it.
The current langchain implementation doesn't seem to have an option to submit examples; instead it combines all messages into the chat history. That would lead to unexpected results if you used it for your examples.
```
def chat_question(context=None, examples=[], chat_instruction=None):
chat_model = ChatModel.from_pretrained("chat-bison@001")
parameters = {
"temperature": .0,
"max_output_tokens": 300,
"top_p": 0.3,
"top_k": 3,
}
chat = chat_model.start_chat(
context=context,
        examples=examples  # (emphasis in the original report: examples are passed here, separately from the chat history)
)
response = chat.send_message(chat_instruction, **parameters)
return response
```
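As a stop-gap, few-shot examples can be handed to the Vertex SDK directly rather than being mixed into the LangChain chat history. A hedged sketch (the import path and availability of `InputOutputTextPair` depend on the google-cloud-aiplatform version; the prompts are made up):
```python
from vertexai.preview.language_models import ChatModel, InputOutputTextPair

examples = [
    InputOutputTextPair(
        input_text="What is the capital of France?",
        output_text="Paris",
    ),
]
chat = ChatModel.from_pretrained("chat-bison@001").start_chat(
    context="You answer geography questions with a single word.",
    examples=examples,  # few-shot examples, kept out of the running history
)
print(chat.send_message("What is the capital of Spain?").text)
```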
### Expected behavior
Allow for a set of examples to be passed in when setting up the ChatVertexAI or when using the chat() function.
Apologies if I've missed a way to do this. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5278/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5278/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5277 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5277/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5277/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5277/events | https://github.com/langchain-ai/langchain/issues/5277 | 1,726,855,450 | I_kwDOIPDwls5m7bka | 5,277 | when i create ClientChatOpenAI error | {
"login": "LayneHu",
"id": 17290086,
"node_id": "MDQ6VXNlcjE3MjkwMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/17290086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LayneHu",
"html_url": "https://github.com/LayneHu",
"followers_url": "https://api.github.com/users/LayneHu/followers",
"following_url": "https://api.github.com/users/LayneHu/following{/other_user}",
"gists_url": "https://api.github.com/users/LayneHu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LayneHu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LayneHu/subscriptions",
"organizations_url": "https://api.github.com/users/LayneHu/orgs",
"repos_url": "https://api.github.com/users/LayneHu/repos",
"events_url": "https://api.github.com/users/LayneHu/events{/privacy}",
"received_events_url": "https://api.github.com/users/LayneHu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-26T03:42:55 | 2023-09-10T16:11:53 | 2023-09-10T16:11:52 | NONE | null | ### System Info
Cannot specify both model and engine
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. I create ClientChatOpenAI with the code below:
```python
"""Azure OpenAI chat wrapper."""
from __future__ import annotations
import logging
from typing import Any, Dict
from pydantic import root_validator
from langchain.chat_models.openai import ChatOpenAI
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
class ClientChatOpenAI(ChatOpenAI):
deployment_name: str = ""
openai_api_base: str = ""
openai_api_key: str = ""
openai_organization: str = ""
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values,
"openai_api_key",
"OPENAI_API_KEY",
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
)
openai_organization = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai
openai.api_base = openai_api_base
openai.api_key = openai_api_key
if openai_organization:
openai.organization = openai_organization
except ImportError:
raise ValueError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
try:
values["client"] = openai.ChatCompletion
except AttributeError:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`."
)
if values["n"] < 1:
raise ValueError("n must be at least 1.")
if values["n"] > 1 and values["streaming"]:
raise ValueError("n must be 1 when streaming.")
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
return {
**super()._default_params,
"engine": self.deployment_name,
}
```
2. Use the code:
```python
chat = ClientChatOpenAI(
temperature=0,
streaming=True,
openai_api_key=os.getenv("OPENAI_CONFIG_0_API_KEY"),
openai_api_base=os.getenv("OPENAI_CONFIG_0_END_POINT"),
)
batch_messages = [
[SystemMessage(content="你是ai助手."), HumanMessage(content=chat_request.prompts)],
]
result = chat.generate(batch_messages)
print(result.llm_output["token_usage"])
return result
```
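For what it's worth, a hedged reading of the error: the base `ChatOpenAI` call already sends `model=self.model_name`, so adding `"engine"` in `_default_params` means both are sent, and the standard (non-Azure) OpenAI endpoint appears to reject that with "Cannot specify both model and engine" (this is an assumption about the root cause, not a confirmed diagnosis). A minimal sketch of one possible adjustment:
```python
from typing import Any, Dict

from langchain.chat_models import ChatOpenAI


class ClientChatOpenAI(ChatOpenAI):
    deployment_name: str = ""

    @property
    def _default_params(self) -> Dict[str, Any]:
        # Only add "engine" for Azure-style deployments; the base class already
        # sends "model", and sending both trips the openai client.
        params = dict(super()._default_params)
        if self.deployment_name:
            params["engine"] = self.deployment_name
        return params
```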
### Expected behavior
I think the code is ok. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5277/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5276 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5276/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5276/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5276/events | https://github.com/langchain-ai/langchain/issues/5276 | 1,726,853,363 | I_kwDOIPDwls5m7bDz | 5,276 | python SDK can't query documents from an existing collection | {
"login": "qlql489",
"id": 10767590,
"node_id": "MDQ6VXNlcjEwNzY3NTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/10767590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qlql489",
"html_url": "https://github.com/qlql489",
"followers_url": "https://api.github.com/users/qlql489/followers",
"following_url": "https://api.github.com/users/qlql489/following{/other_user}",
"gists_url": "https://api.github.com/users/qlql489/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qlql489/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qlql489/subscriptions",
"organizations_url": "https://api.github.com/users/qlql489/orgs",
"repos_url": "https://api.github.com/users/qlql489/repos",
"events_url": "https://api.github.com/users/qlql489/events{/privacy}",
"received_events_url": "https://api.github.com/users/qlql489/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-26T03:40:09 | 2023-06-01T00:28:00 | 2023-06-01T00:28:00 | NONE | null | ### Feature request
In the JS SDK of Milvus, there is a function to query documents from an existing collection, while in the Python SDK this function is not available. Instead, the collection has to be constructed in the following way:
```python
vector_db = Milvus.from_documents(
docs,
embeddings,
connection_args={"host": "127.0.0.1", "port": "19530"},
)
```
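For reference, a hedged sketch of pointing the Python wrapper at an existing collection instead of re-inserting documents (the constructor argument names are assumptions and vary between langchain versions; the collection name is made up):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

vector_db = Milvus(
    embedding_function=OpenAIEmbeddings(),
    collection_name="my_existing_collection",  # hypothetical existing collection
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
docs = vector_db.similarity_search("What did the speaker say about safety?")
```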
### Motivation
I cannot ask multiple questions
### Your contribution
no | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5276/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5275 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5275/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5275/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5275/events | https://github.com/langchain-ai/langchain/issues/5275 | 1,726,669,439 | I_kwDOIPDwls5m6uJ_ | 5,275 | When initializing ChatVertexAI fastapi thread pool becomes unaccessible | {
"login": "bjk95",
"id": 41258773,
"node_id": "MDQ6VXNlcjQxMjU4Nzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/41258773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bjk95",
"html_url": "https://github.com/bjk95",
"followers_url": "https://api.github.com/users/bjk95/followers",
"following_url": "https://api.github.com/users/bjk95/following{/other_user}",
"gists_url": "https://api.github.com/users/bjk95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bjk95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bjk95/subscriptions",
"organizations_url": "https://api.github.com/users/bjk95/orgs",
"repos_url": "https://api.github.com/users/bjk95/repos",
"events_url": "https://api.github.com/users/bjk95/events{/privacy}",
"received_events_url": "https://api.github.com/users/bjk95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-26T00:48:15 | 2023-09-10T16:11:57 | 2023-09-10T16:11:57 | CONTRIBUTOR | null | ### System Info
When I initialise ChatVertexAI in a FastAPI app, the thread pool never returns to idle, blocking the server and producing the error below.
E0526 10:18:51.289447000 4300375424 thread_pool.cc:230] Waiting for thread pool to idle before forking
on langchain 0.180
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Initialise ChatVertexAI in a fastapi app. ChatOpenAI works fine.
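A minimal reproduction sketch (the endpoint and module layout are illustrative assumptions); serving it with uvicorn reportedly surfaces the thread-pool message above:
```python
from fastapi import FastAPI
from langchain.chat_models import ChatVertexAI

app = FastAPI()
chat = ChatVertexAI()  # initialising at import time appears to leave the gRPC thread pool busy


@app.get("/ping")
def ping() -> dict:
    return {"ok": True}
```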
### Expected behavior
Don't error | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5275/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5274 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5274/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5274/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5274/events | https://github.com/langchain-ai/langchain/pull/5274 | 1,726,644,384 | PR_kwDOIPDwls5RY4BL | 5,274 | added more examples for chatbots and question answering | {
"login": "mikayelh",
"id": 47828027,
"node_id": "MDQ6VXNlcjQ3ODI4MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/47828027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikayelh",
"html_url": "https://github.com/mikayelh",
"followers_url": "https://api.github.com/users/mikayelh/followers",
"following_url": "https://api.github.com/users/mikayelh/following{/other_user}",
"gists_url": "https://api.github.com/users/mikayelh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikayelh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikayelh/subscriptions",
"organizations_url": "https://api.github.com/users/mikayelh/orgs",
"repos_url": "https://api.github.com/users/mikayelh/repos",
"events_url": "https://api.github.com/users/mikayelh/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikayelh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4678528836,
"node_id": "LA_kwDOIPDwls8AAAABFtyvRA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | 4 | 2023-05-26T00:27:28 | 2023-07-15T02:10:52 | 2023-07-15T02:10:52 | NONE | null | # More end-to-end examples with LangChain 🚀
Hiya, I added more examples for chatbots and question answering
1. [JarvisBase](https://github.com/peterw/JarvisBase): An end-to-end Customer Support assistant that transcribes user voice, performs Question Answering over a scraped documentation base, & answers in natural language.
2. [PDF Analysis Slack Chatbot](https://github.com/hollaugo/slack-financial-analysis-chatbot): Build an end-to-end Slack chatbot that chats with multiple PDF files (financial analysis in this case).
3. [Question Answering over multiple PDFs](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/): An intro guide for building a chat with multiple PDFs solution.
# Who can review?
forgot to tag @hwchase17 - sorry!
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5274/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5274",
"html_url": "https://github.com/langchain-ai/langchain/pull/5274",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5274.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5274.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5273 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5273/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5273/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5273/events | https://github.com/langchain-ai/langchain/pull/5273 | 1,726,597,095 | PR_kwDOIPDwls5RYt4k | 5,273 | OpenAI lint | {
"login": "dev2049",
"id": 130488702,
"node_id": "U_kgDOB8cZfg",
"avatar_url": "https://avatars.githubusercontent.com/u/130488702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dev2049",
"html_url": "https://github.com/dev2049",
"followers_url": "https://api.github.com/users/dev2049/followers",
"following_url": "https://api.github.com/users/dev2049/following{/other_user}",
"gists_url": "https://api.github.com/users/dev2049/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dev2049/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dev2049/subscriptions",
"organizations_url": "https://api.github.com/users/dev2049/orgs",
"repos_url": "https://api.github.com/users/dev2049/repos",
"events_url": "https://api.github.com/users/dev2049/events{/privacy}",
"received_events_url": "https://api.github.com/users/dev2049/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-25T23:11:08 | 2023-05-25T23:20:08 | 2023-05-25T23:20:07 | CONTRIBUTOR | null | Causing lint issues if you have openai installed, annoying for local dev | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5273/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5273",
"html_url": "https://github.com/langchain-ai/langchain/pull/5273",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5273.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5273.patch",
"merged_at": "2023-05-25T23:20:07"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5272 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5272/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5272/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5272/events | https://github.com/langchain-ai/langchain/pull/5272 | 1,726,578,754 | PR_kwDOIPDwls5RYp6T | 5,272 | Fixed typo: 'ouput' to 'output' in all documentation | {
"login": "deepblue",
"id": 2792,
"node_id": "MDQ6VXNlcjI3OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deepblue",
"html_url": "https://github.com/deepblue",
"followers_url": "https://api.github.com/users/deepblue/followers",
"following_url": "https://api.github.com/users/deepblue/following{/other_user}",
"gists_url": "https://api.github.com/users/deepblue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deepblue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deepblue/subscriptions",
"organizations_url": "https://api.github.com/users/deepblue/orgs",
"repos_url": "https://api.github.com/users/deepblue/repos",
"events_url": "https://api.github.com/users/deepblue/events{/privacy}",
"received_events_url": "https://api.github.com/users/deepblue/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 1 | 2023-05-25T22:47:35 | 2023-05-26T02:18:32 | 2023-05-26T02:18:32 | CONTRIBUTOR | null | # Fixed typo: 'ouput' to 'output' in all documentation
In this instance, the typo 'ouput' was amended to 'output' in all occurrences within the documentation. There are no dependencies required for this change.
## Who can review?
@hwchase17
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5272/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5272",
"html_url": "https://github.com/langchain-ai/langchain/pull/5272",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5272.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5272.patch",
"merged_at": "2023-05-26T02:18:32"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5271 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5271/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5271/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5271/events | https://github.com/langchain-ai/langchain/issues/5271 | 1,726,558,628 | I_kwDOIPDwls5m6TGk | 5,271 | ZeroShotAgent fails with ShellTool due to quotes in llm output | {
"login": "bmcclanahan",
"id": 3519757,
"node_id": "MDQ6VXNlcjM1MTk3NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3519757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmcclanahan",
"html_url": "https://github.com/bmcclanahan",
"followers_url": "https://api.github.com/users/bmcclanahan/followers",
"following_url": "https://api.github.com/users/bmcclanahan/following{/other_user}",
"gists_url": "https://api.github.com/users/bmcclanahan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmcclanahan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmcclanahan/subscriptions",
"organizations_url": "https://api.github.com/users/bmcclanahan/orgs",
"repos_url": "https://api.github.com/users/bmcclanahan/repos",
"events_url": "https://api.github.com/users/bmcclanahan/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmcclanahan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-25T22:18:12 | 2023-10-08T16:06:56 | 2023-10-08T16:06:55 | NONE | null | https://github.com/hwchase17/langchain/blob/7652d2abb01208fd51115e34e18b066824e7d921/langchain/agents/mrkl/output_parser.py#L47
Due to the line above, the `ShellTool` fails when used with the `ZeroShotAgent`. When using `langchain.OpenAI` as the `llm`, I encountered a scenario where ChatGPT provides a string surrounded by single quotes for `Action Input:`. This causes the ShellTool not to recognize the input command, because it is surrounded by single quotes which aren't stripped (I get a command-not-found error). This could easily be fixed by stripping single quotes from `action_input`.
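A quick standalone illustration of the quoting behaviour (hedged example command); the suggested one-line parser change follows right after:
```python
action_input = "'echo hello'"  # what the LLM sometimes emits as Action Input
print(action_input.strip(" ").strip('"'))             # 'echo hello'  -> shell: command not found
print(action_input.strip(" ").strip('"').strip("'"))  # echo hello    -> runs as intended
```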
```
return AgentAction(action, action_input.strip(" ").strip('"').strip("'"), text)
``` | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5271/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5270 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5270/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5270/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5270/events | https://github.com/langchain-ai/langchain/issues/5270 | 1,726,507,623 | I_kwDOIPDwls5m6Gpn | 5,270 | use_query_checker for VertexAI fails | {
"login": "steven-levey-sp",
"id": 72816356,
"node_id": "MDQ6VXNlcjcyODE2MzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/72816356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steven-levey-sp",
"html_url": "https://github.com/steven-levey-sp",
"followers_url": "https://api.github.com/users/steven-levey-sp/followers",
"following_url": "https://api.github.com/users/steven-levey-sp/following{/other_user}",
"gists_url": "https://api.github.com/users/steven-levey-sp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steven-levey-sp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steven-levey-sp/subscriptions",
"organizations_url": "https://api.github.com/users/steven-levey-sp/orgs",
"repos_url": "https://api.github.com/users/steven-levey-sp/repos",
"events_url": "https://api.github.com/users/steven-levey-sp/events{/privacy}",
"received_events_url": "https://api.github.com/users/steven-levey-sp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-05-25T21:22:26 | 2023-10-05T16:09:44 | 2023-10-05T16:09:43 | NONE | null | ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
SQLAlchemy==2.0.15
duckdb==0.8.0
duckdb-engine==0.7.3
Running inside GCP Vertex AI Notebook (Jupyter Lab essentially jupyterlab==3.4.8)
python 3.7
### Who can help?
@Jflick58
@lkuligin
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create the Vertex AI LLM (using latest version of LangChain)
`from langchain.llms import VertexAI
palmllm = VertexAI(model_name='text-bison@001',
max_output_tokens=256,
temperature=0.2,
top_p=0.1,
top_k=40,
verbose=True)`
2. Set up the db engine (duckdb in this case)
`engine = create_engine("duckdb:///dw.db")`
3. Then create the chain using SQLDatabaseChain (note the use of use_query_checker=True)
`#Setup the DB
db = SQLDatabase(engine=engine,metadata=MetaData(bind=engine),include_tables=[table_name])
#Setup the chain
db_chain = SQLDatabaseChain.from_llm(palmllm,db,verbose=True,use_query_checker=True,prompt=PROMPT,return_intermediate_steps=True,top_k=3)`
4. Run a query against the chain (notice the SQLQuery output "The query is correct."; it is as if it's trying to execute "The query is correct." as SQL):
`> Entering new SQLDatabaseChain chain...
How many countries are there
SQLQuery:The query is correct.`
This is the error returned:
`ProgrammingError: (duckdb.ParserException) Parser Error: syntax error at or near "The"
LINE 1: The query is correct.
^
[SQL: The query is correct.]
(Background on this error at: https://sqlalche.me/e/14/f405)`
IMPORTANT:
- If I remove the "use_query_checker=True" then everything works well.
- If I use the OpenAI LLM and don't change anything (except the LLM), then it works with the "use_query_checker=True" setting.
This relates to [#5049](https://github.com/hwchase17/langchain/pull/5049)
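A hedged workaround sketch while this is open: give the checker a prompt that forces SQL-only output, reusing `palmllm`, `db`, and `PROMPT` from the steps above. It assumes `SQLDatabaseChain` accepts a `query_checker_prompt` argument (as in recent versions) and that the checker template uses the `query` and `dialect` variables.
```python
from langchain.chains import SQLDatabaseChain
from langchain.prompts import PromptTemplate

CHECKER_PROMPT = PromptTemplate(
    input_variables=["query", "dialect"],
    template=(
        "Double check the {dialect} query below for common mistakes and fix them if needed.\n"
        "Output ONLY the final SQL query, with no explanation or commentary.\n\n{query}"
    ),
)

db_chain = SQLDatabaseChain.from_llm(
    palmllm,
    db,
    verbose=True,
    use_query_checker=True,
    query_checker_prompt=CHECKER_PROMPT,  # assumed parameter; see note above
    prompt=PROMPT,
    return_intermediate_steps=True,
    top_k=3,
)
```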
### Expected behavior
I believe the intention of that flag "use_query_checker=True" is to validate the SQL and allow the chain to recover from a simple syntax error. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5270/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5270/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5269 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5269/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5269/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5269/events | https://github.com/langchain-ai/langchain/issues/5269 | 1,726,486,276 | I_kwDOIPDwls5m6BcE | 5,269 | pandas dataframe agent generates correct Action Input, but returns incorrect result | {
"login": "ayush-1506",
"id": 49805996,
"node_id": "MDQ6VXNlcjQ5ODA1OTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/49805996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayush-1506",
"html_url": "https://github.com/ayush-1506",
"followers_url": "https://api.github.com/users/ayush-1506/followers",
"following_url": "https://api.github.com/users/ayush-1506/following{/other_user}",
"gists_url": "https://api.github.com/users/ayush-1506/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayush-1506/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayush-1506/subscriptions",
"organizations_url": "https://api.github.com/users/ayush-1506/orgs",
"repos_url": "https://api.github.com/users/ayush-1506/repos",
"events_url": "https://api.github.com/users/ayush-1506/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayush-1506/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 10 | 2023-05-25T21:03:00 | 2023-11-03T16:08:07 | 2023-11-03T16:08:06 | NONE | null | ### System Info
Langchain version: 0.0.180
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [x] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm running the example from docs: https://python.langchain.com/en/latest/modules/agents/toolkits/examples/pandas.html.
`agent.run("how many people are 28 years old?")`
gives:
```
> Entering new AgentExecutor chain...
Thought: I need to use the `df` dataframe to find how many people are 28 years old.
Action: python_repl_ast
Action Input: df['Age'] == 28
Observation: 0
Thought: There are no people 28 years old.
Final Answer: 0
```
In other cases, the Action Input the LLM calculates is correct, but the observation (result of applying this action on the dataframe) is incorrect. This makes me believe that the LLM isn't at fault here.
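A quick sanity check in plain pandas (toy data, hedged): the boolean mask on its own is not a count; summing it is, which is what the agent's Observation should reflect.
```python
import pandas as pd

df = pd.DataFrame({"Age": [28, 28, 30, 28]})  # hypothetical toy data
mask = df["Age"] == 28   # boolean Series, not a number
print(int(mask.sum()))   # 3 -> the count the agent should observe
```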
### Expected behavior
Should return 25. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5269/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5268 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5268/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5268/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5268/events | https://github.com/langchain-ai/langchain/pull/5268 | 1,726,479,097 | PR_kwDOIPDwls5RYUKY | 5,268 | Added pipline args to `HuggingFacePipeline.from_model_id` | {
"login": "solomspd",
"id": 40343437,
"node_id": "MDQ6VXNlcjQwMzQzNDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/40343437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/solomspd",
"html_url": "https://github.com/solomspd",
"followers_url": "https://api.github.com/users/solomspd/followers",
"following_url": "https://api.github.com/users/solomspd/following{/other_user}",
"gists_url": "https://api.github.com/users/solomspd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/solomspd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/solomspd/subscriptions",
"organizations_url": "https://api.github.com/users/solomspd/orgs",
"repos_url": "https://api.github.com/users/solomspd/repos",
"events_url": "https://api.github.com/users/solomspd/events{/privacy}",
"received_events_url": "https://api.github.com/users/solomspd/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-25T20:56:15 | 2023-05-26T00:54:52 | 2023-05-26T00:54:52 | CONTRIBUTOR | null | The current `HuggingFacePipeline.from_model_id` does not allow passing of pipeline arguments to the transformer pipeline.
This PR enables setting important pipeline parameters such as `max_new_tokens`.
Prior to this PR it was necessary to manually create the pipeline through huggingface transformers and then hand it to langchain.
For example, instead of this
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

from langchain.llms import HuggingFacePipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
```
You can write this
```py
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2", task="text-generation", pipeline_kwargs={"max_new_tokens": 10}
)
```
## Who can review?
@hwchase17
@agola11
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5268/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5268",
"html_url": "https://github.com/langchain-ai/langchain/pull/5268",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5268.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5268.patch",
"merged_at": "2023-05-26T00:54:52"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5267 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5267/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5267/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5267/events | https://github.com/langchain-ai/langchain/pull/5267 | 1,726,448,140 | PR_kwDOIPDwls5RYNZs | 5,267 | Zep sdk version | {
"login": "dev2049",
"id": 130488702,
"node_id": "U_kgDOB8cZfg",
"avatar_url": "https://avatars.githubusercontent.com/u/130488702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dev2049",
"html_url": "https://github.com/dev2049",
"followers_url": "https://api.github.com/users/dev2049/followers",
"following_url": "https://api.github.com/users/dev2049/following{/other_user}",
"gists_url": "https://api.github.com/users/dev2049/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dev2049/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dev2049/subscriptions",
"organizations_url": "https://api.github.com/users/dev2049/orgs",
"repos_url": "https://api.github.com/users/dev2049/repos",
"events_url": "https://api.github.com/users/dev2049/events{/privacy}",
"received_events_url": "https://api.github.com/users/dev2049/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 1 | 2023-05-25T20:29:17 | 2023-05-25T20:42:11 | 2023-05-25T20:42:11 | CONTRIBUTOR | null | zep-python's sync methods no longer need an asyncio wrapper. This was causing issues with FastAPI deployment.
Zep also now supports putting and getting of arbitrary message metadata.
Bump zep-python version to v0.30
Remove nest-asyncio from Zep example notebooks.
Modify tests to include metadata. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5267/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5267",
"html_url": "https://github.com/langchain-ai/langchain/pull/5267",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5267.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5267.patch",
"merged_at": "2023-05-25T20:42:11"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5266 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5266/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5266/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5266/events | https://github.com/langchain-ai/langchain/issues/5266 | 1,726,411,036 | I_kwDOIPDwls5m5vEc | 5,266 | Issue: RetrievalQA -> ConversationalChatAgent -> AgentExecutor gives no response if document-related | {
"login": "ronasuti",
"id": 5325389,
"node_id": "MDQ6VXNlcjUzMjUzODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5325389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ronasuti",
"html_url": "https://github.com/ronasuti",
"followers_url": "https://api.github.com/users/ronasuti/followers",
"following_url": "https://api.github.com/users/ronasuti/following{/other_user}",
"gists_url": "https://api.github.com/users/ronasuti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ronasuti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ronasuti/subscriptions",
"organizations_url": "https://api.github.com/users/ronasuti/orgs",
"repos_url": "https://api.github.com/users/ronasuti/repos",
"events_url": "https://api.github.com/users/ronasuti/events{/privacy}",
"received_events_url": "https://api.github.com/users/ronasuti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 11 | 2023-05-25T19:58:01 | 2023-09-18T16:10:56 | 2023-09-18T16:10:55 | NONE | null | ### Issue you'd like to raise.
Hi everybody,
I'm working with an LLM setup inspired by @pelyhe's implementation in #4573.
It uses a RetrievalQA that queries a persistent embedded ChromaDB, then feeds it into a ConversationalChatAgent and then an AgentExecutor.
Currently, this setup works only for basic situations which definitely have nothing to do with documents. Once I ask it something document-relevant, it gives an empty response. I have a nagging suspicion I've simply wired things up incorrectly, but it's not clear how to fix it.
```
@st.cache_resource
def load_agent():
vectorstore = Chroma(persist_directory=CHROMA_DIR)
basic_prompt_template = """If the context is not relevant,
please answer the question by using your own knowledge about the topic.
###Context:
{context}
###Human:
{question}
###Assistant:
"""
prompt = PromptTemplate(
template=basic_prompt_template, input_variables=["context", "question"]
)
system_msg = "You are a helpful assistant."
chain_type_kwargs = {"prompt": prompt}
# Time to initialize the LLM, as late as possible so everything not requiring the LLM instance to fail fast
llm = GPT4All(
model=MODEL,
verbose=True,
)
# Initialise QA chain for document-relevant queries
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=vectorstore.as_retriever(),
chain_type_kwargs=chain_type_kwargs,
)
tools = [
Tool(
name="Document tool",
func=qa.run,
description="useful for when you need to answer questions from documents.",
),
]
agent = ConversationalChatAgent.from_llm_and_tools(
llm=llm, tools=tools, system_message=system_msg, verbose=True
)
return AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=True,
memory=ConversationBufferMemory(
memory_key="chat_history", return_messages=True
),
)
agent = load_agent()
###########################
# Streamlit UI operation. #
###########################
if "generated" not in st.session_state:
st.session_state["generated"] = []
if "past" not in st.session_state:
st.session_state["past"] = []
def get_text():
input_text = st.text_input(label="", key="question")
return input_text
user_input = get_text()
if user_input:
try:
output = agent.run(input=user_input)
except ValueError as e:
output = str(e)
if not output.startswith("Could not parse LLM output: "):
raise Exception(output)
output = output.removeprefix("Could not parse LLM output: ").removesuffix("`")
st.session_state.past.append(user_input)
st.session_state.generated.append(output)
if st.session_state["generated"]:
for i in range(len(st.session_state["generated"]) - 1, -1, -1):
message(st.session_state["generated"][i], key=str(i))
message(st.session_state["past"][i], is_user=True, key=str(i) + "_user")
```
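One thing that stands out (an assumption about the cause, not a confirmed diagnosis): `Chroma(persist_directory=CHROMA_DIR)` is built without an `embedding_function`, so the retriever may have nothing to embed queries with. A minimal sketch of re-opening the persisted store with the same embeddings it was built with:
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceEmbeddings()  # whichever model originally built CHROMA_DIR
vectorstore = Chroma(persist_directory="chroma_db", embedding_function=embeddings)
print(vectorstore.as_retriever().get_relevant_documents("test query")[:1])
```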
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5266/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5265 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5265/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5265/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5265/events | https://github.com/langchain-ai/langchain/pull/5265 | 1,726,405,860 | PR_kwDOIPDwls5RYEMo | 5,265 | Fixed regression in JoplinLoader's get note url | {
"login": "alondmnt",
"id": 17462125,
"node_id": "MDQ6VXNlcjE3NDYyMTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/17462125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alondmnt",
"html_url": "https://github.com/alondmnt",
"followers_url": "https://api.github.com/users/alondmnt/followers",
"following_url": "https://api.github.com/users/alondmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/alondmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alondmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alondmnt/subscriptions",
"organizations_url": "https://api.github.com/users/alondmnt/orgs",
"repos_url": "https://api.github.com/users/alondmnt/repos",
"events_url": "https://api.github.com/users/alondmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alondmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-25T19:53:47 | 2023-05-25T20:10:22 | 2023-05-25T20:10:11 | CONTRIBUTOR | null | Fixes a regression in JoplinLoader that was introduced during the code review (bad `page` wildcard in _get_note_url).
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@dev2049
@leo-gan
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5265/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5265",
"html_url": "https://github.com/langchain-ai/langchain/pull/5265",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5265.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5265.patch",
"merged_at": "2023-05-25T20:10:11"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5264 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5264/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5264/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5264/events | https://github.com/langchain-ai/langchain/issues/5264 | 1,726,337,382 | I_kwDOIPDwls5m5dFm | 5,264 | UnstructuredMarkdownLoader resulting in `zipfile.BadZipFile: File is not a zip file` | {
"login": "abhishekbhakat",
"id": 43564549,
"node_id": "MDQ6VXNlcjQzNTY0NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/43564549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekbhakat",
"html_url": "https://github.com/abhishekbhakat",
"followers_url": "https://api.github.com/users/abhishekbhakat/followers",
"following_url": "https://api.github.com/users/abhishekbhakat/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekbhakat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekbhakat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekbhakat/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekbhakat/orgs",
"repos_url": "https://api.github.com/users/abhishekbhakat/repos",
"events_url": "https://api.github.com/users/abhishekbhakat/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekbhakat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 10 | 2023-05-25T18:59:18 | 2023-11-29T17:55:26 | 2023-11-29T17:55:16 | NONE | null | ### System Info
```
langchain==0.0.180
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] on win32
Windows 11
```
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Python Code
```python
from langchain.document_loaders import UnstructuredMarkdownLoader
markdown_path = r"Pyspy.md"
loader = UnstructuredMarkdownLoader(markdown_path)
data = loader.load()
```
Markdown file `Pyspy.md`
````
```
.pip/bin/py-spy top -p 70
```
````
### Expected behavior
It should return a `List[Document]` in `data`. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5264/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5263 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5263/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5263/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5263/events | https://github.com/langchain-ai/langchain/pull/5263 | 1,726,334,702 | PR_kwDOIPDwls5RX0k1 | 5,263 | Zep SDK Version Update | {
"login": "danielchalef",
"id": 131175,
"node_id": "MDQ6VXNlcjEzMTE3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/131175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielchalef",
"html_url": "https://github.com/danielchalef",
"followers_url": "https://api.github.com/users/danielchalef/followers",
"following_url": "https://api.github.com/users/danielchalef/following{/other_user}",
"gists_url": "https://api.github.com/users/danielchalef/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielchalef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielchalef/subscriptions",
"organizations_url": "https://api.github.com/users/danielchalef/orgs",
"repos_url": "https://api.github.com/users/danielchalef/repos",
"events_url": "https://api.github.com/users/danielchalef/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielchalef/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5509644103,
"node_id": "LA_kwDOIPDwls8AAAABSGZ_Rw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/maintainer-to-merge",
"name": "maintainer-to-merge",
"color": "B1D5ED",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 2 | 2023-05-25T18:57:29 | 2023-06-25T23:16:31 | 2023-05-25T20:42:20 | CONTRIBUTOR | null | # Zep SDK Version Update
zep-python's sync methods no longer need an asyncio wrapper. This was causing issues with FastAPI deployment.
Zep also now supports putting and getting of arbitrary message metadata.
- Bump zep-python version to v0.30
- Remove `nest-asyncio` from Zep example notebooks.
- Modify tests to include metadata.
- @dev2049
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5263/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5263",
"html_url": "https://github.com/langchain-ai/langchain/pull/5263",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5263.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5263.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5262 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5262/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5262/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5262/events | https://github.com/langchain-ai/langchain/issues/5262 | 1,726,293,593 | I_kwDOIPDwls5m5SZZ | 5,262 | Assertion Error when using VertexAIEmbeddings with faiss vectorstore | {
"login": "bhanu-pappala",
"id": 67595337,
"node_id": "MDQ6VXNlcjY3NTk1MzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/67595337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhanu-pappala",
"html_url": "https://github.com/bhanu-pappala",
"followers_url": "https://api.github.com/users/bhanu-pappala/followers",
"following_url": "https://api.github.com/users/bhanu-pappala/following{/other_user}",
"gists_url": "https://api.github.com/users/bhanu-pappala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhanu-pappala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhanu-pappala/subscriptions",
"organizations_url": "https://api.github.com/users/bhanu-pappala/orgs",
"repos_url": "https://api.github.com/users/bhanu-pappala/repos",
"events_url": "https://api.github.com/users/bhanu-pappala/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhanu-pappala/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-05-25T18:28:41 | 2023-12-20T19:12:22 | 2023-09-14T16:08:48 | NONE | null | ```
~\Anaconda3\lib\site-packages\langchain\memory\vectorstore.py in save_context(self, inputs, outputs)
67 """Save context from this conversation to buffer."""
68 documents = self._form_documents(inputs, outputs)
---> 69 self.retriever.add_documents(documents)
70
71 def clear(self) -> None:
~\Anaconda3\lib\site-packages\langchain\vectorstores\base.py in add_documents(self, documents, **kwargs)
413 def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
414 """Add documents to vectorstore."""
--> 415 return self.vectorstore.add_documents(documents, **kwargs)
416
417 async def aadd_documents(
~\Anaconda3\lib\site-packages\langchain\vectorstores\base.py in add_documents(self, documents, **kwargs)
60 texts = [doc.page_content for doc in documents]
61 metadatas = [doc.metadata for doc in documents]
---> 62 return self.add_texts(texts, metadatas, **kwargs)
63
64 async def aadd_documents(
~\Anaconda3\lib\site-packages\langchain\vectorstores\faiss.py in add_texts(self, texts, metadatas, ids, **kwargs)
150 # Embed and create the documents.
151 embeddings = [self.embedding_function(text) for text in texts]
--> 152 return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)
153
154 def add_embeddings(
~\Anaconda3\lib\site-packages\langchain\vectorstores\faiss.py in __add(self, texts, embeddings, metadatas, ids, **kwargs)
117 if self._normalize_L2:
118 faiss.normalize_L2(vector)
--> 119 self.index.add(vector)
120 # Get list of index, id, and docs.
121 full_info = [(starting_len + i, ids[i], doc) for i, doc in enumerate(documents)]
~\Anaconda3\lib\site-packages\faiss\class_wrappers.py in replacement_add(self, x)
226
227 n, d = x.shape
--> 228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
230 self.add_c(n, swig_ptr(x))
AssertionError:
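# Note: this assertion (`d == self.d`) fails when the dimensionality of the
# vectors being added does not match the dimensionality the FAISS index was
# created with, e.g. an index built from one embedding model and then written
# to with VertexAI embeddings of a different size. A hypothetical sanity check
# (names are illustrative):
#
#     probe = embeddings.embed_query("dimension probe")
#     print(len(probe), vectorstore.index.d)  # the two numbers must match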
``` | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5262/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5261 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5261/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5261/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5261/events | https://github.com/langchain-ai/langchain/pull/5261 | 1,726,268,544 | PR_kwDOIPDwls5RXmKj | 5,261 | Support bigquery dialect - SQL | {
"login": "HassanOuda",
"id": 2914618,
"node_id": "MDQ6VXNlcjI5MTQ2MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2914618?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HassanOuda",
"html_url": "https://github.com/HassanOuda",
"followers_url": "https://api.github.com/users/HassanOuda/followers",
"following_url": "https://api.github.com/users/HassanOuda/following{/other_user}",
"gists_url": "https://api.github.com/users/HassanOuda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HassanOuda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HassanOuda/subscriptions",
"organizations_url": "https://api.github.com/users/HassanOuda/orgs",
"repos_url": "https://api.github.com/users/HassanOuda/repos",
"events_url": "https://api.github.com/users/HassanOuda/events{/privacy}",
"received_events_url": "https://api.github.com/users/HassanOuda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-25T18:09:09 | 2023-05-26T01:19:18 | 2023-05-26T01:19:18 | CONTRIBUTOR | null | # Support bigquery dialect - SQL
This PR adds an if statement to handle the BigQuery SQL dialect. Previously, using the BigQuery dialect failed on `SET search_path TO`, so a condition was added that passes the dataset as the `schema` parameter instead, which is the BigQuery equivalent of `SET search_path TO`. I have tested it and it works.
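A hedged usage sketch of what this change enables (the URI and dataset name are illustrative, not taken from the PR):

```python
from langchain.sql_database import SQLDatabase

# With the BigQuery dialect, the dataset is passed through the `schema`
# argument instead of relying on `SET search_path TO`.
db = SQLDatabase.from_uri(
    "bigquery://my-project/my_dataset",  # sqlalchemy-bigquery style URI (assumed)
    schema="my_dataset",
)
```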
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
@dev2049
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5261/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5261",
"html_url": "https://github.com/langchain-ai/langchain/pull/5261",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5261.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5261.patch",
"merged_at": "2023-05-26T01:19:18"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5260 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5260/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5260/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5260/events | https://github.com/langchain-ai/langchain/pull/5260 | 1,726,213,799 | PR_kwDOIPDwls5RXaOj | 5,260 | Build an abstract dialogue model using classes and methods to represent different dialogue elements | {
"login": "glide-the",
"id": 16206043,
"node_id": "MDQ6VXNlcjE2MjA2MDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/16206043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glide-the",
"html_url": "https://github.com/glide-the",
"followers_url": "https://api.github.com/users/glide-the/followers",
"following_url": "https://api.github.com/users/glide-the/following{/other_user}",
"gists_url": "https://api.github.com/users/glide-the/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glide-the/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glide-the/subscriptions",
"organizations_url": "https://api.github.com/users/glide-the/orgs",
"repos_url": "https://api.github.com/users/glide-the/repos",
"events_url": "https://api.github.com/users/glide-the/events{/privacy}",
"received_events_url": "https://api.github.com/users/glide-the/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5496111774,
"node_id": "LA_kwDOIPDwls8AAAABR5gCng",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/needs%20work",
"name": "needs work",
"color": "F9D0C4",
"default": false,
"description": "PRs that need more work"
}
] | closed | false | null | [] | null | 0 | 2023-05-25T17:25:47 | 2023-06-07T17:05:57 | 2023-06-07T17:05:57 | NONE | null | # Build an abstract dialogue model using classes and methods to represent different dialogue elements
Fixes # None
## Before submitting
If you want to review, please refer to the quick-start example provided in `langchain/chains/dialogue_answering/__main__.py`. You may need to set the OpenAI API key and the following startup parameters: `--dialogue-path`, the location of the dialogue file, and `--embedding-model`, the HuggingFaceEmbeddings model to use (defaults to GanymedeNil/text2vec-large-chinese if not specified).
Regarding the format of the dialogue file, please refer to the following information:
```text
sun:
Has the offline model been run?
glide-the:
Yes, it has been run, but the results are not very satisfactory.
glide-the:
It lacks chat intelligence and falls far behind in terms of logic and reasoning.
sun:
Are you available for voice chat?
glide-the:
I'm considering using this offline model: https://huggingface.co/chat
glide-the:
voice chat okay.
glide-the:
You can take a look at the dev_agent branch of the langchain-chatglm project.
glide-the:
There's a dialogue model question-answering example under the agent.
sun:
Alright.
glide-the:
The specified chat record file is exported from WeChat.
```
## Who can review?
Including loader and agent applications
- @eyurtsev
- @vowelparrot
- @dev2049
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5260/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5260",
"html_url": "https://github.com/langchain-ai/langchain/pull/5260",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5260.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5260.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5259 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5259/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5259/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5259/events | https://github.com/langchain-ai/langchain/pull/5259 | 1,726,206,459 | PR_kwDOIPDwls5RXYpI | 5,259 | feat: support for shopping search in SerpApi | {
"login": "aymenfurter",
"id": 20464460,
"node_id": "MDQ6VXNlcjIwNDY0NDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/20464460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aymenfurter",
"html_url": "https://github.com/aymenfurter",
"followers_url": "https://api.github.com/users/aymenfurter/followers",
"following_url": "https://api.github.com/users/aymenfurter/following{/other_user}",
"gists_url": "https://api.github.com/users/aymenfurter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aymenfurter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aymenfurter/subscriptions",
"organizations_url": "https://api.github.com/users/aymenfurter/orgs",
"repos_url": "https://api.github.com/users/aymenfurter/repos",
"events_url": "https://api.github.com/users/aymenfurter/events{/privacy}",
"received_events_url": "https://api.github.com/users/aymenfurter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-25T17:20:08 | 2023-05-28T22:23:26 | 2023-05-28T04:20:25 | CONTRIBUTOR | null | # Support for shopping search in SerpApi
## Who can review?
@vowelparrot | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5259/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5259",
"html_url": "https://github.com/langchain-ai/langchain/pull/5259",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5259.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5259.patch",
"merged_at": "2023-05-28T04:20:25"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5258 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5258/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5258/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5258/events | https://github.com/langchain-ai/langchain/issues/5258 | 1,726,151,388 | I_kwDOIPDwls5m4vrc | 5,258 | Internal error encountered when using VertexAI in ConversationalRetrievalChain | {
"login": "bhanu-pappala",
"id": 67595337,
"node_id": "MDQ6VXNlcjY3NTk1MzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/67595337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhanu-pappala",
"html_url": "https://github.com/bhanu-pappala",
"followers_url": "https://api.github.com/users/bhanu-pappala/followers",
"following_url": "https://api.github.com/users/bhanu-pappala/following{/other_user}",
"gists_url": "https://api.github.com/users/bhanu-pappala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhanu-pappala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhanu-pappala/subscriptions",
"organizations_url": "https://api.github.com/users/bhanu-pappala/orgs",
"repos_url": "https://api.github.com/users/bhanu-pappala/repos",
"events_url": "https://api.github.com/users/bhanu-pappala/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhanu-pappala/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-25T16:38:50 | 2023-09-10T16:12:07 | 2023-09-10T16:12:06 | NONE | null | ```
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate_prompt(self, prompts, stop, callbacks)
141 ) -> LLMResult:
142 prompt_messages = [p.to_messages() for p in prompts]
--> 143 return self.generate(prompt_messages, stop=stop, callbacks=callbacks)
144
145 async def agenerate_prompt(
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate(self, messages, stop, callbacks)
89 except (KeyboardInterrupt, Exception) as e:
90 run_manager.on_llm_error(e)
---> 91 raise e
92 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
93 generations = [res.generations for res in results]
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate(self, messages, stop, callbacks)
81 )
82 try:
---> 83 results = [
84 self._generate(m, stop=stop, run_manager=run_manager)
85 if new_arg_supported
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in <listcomp>(.0)
82 try:
83 results = [
---> 84 self._generate(m, stop=stop, run_manager=run_manager)
85 if new_arg_supported
86 else self._generate(m, stop=stop)
~\Anaconda3\lib\site-packages\langchain\chat_models\vertexai.py in _generate(self, messages, stop, run_manager)
123 for pair in history.history:
124 chat._history.append((pair.question.content, pair.answer.content))
--> 125 response = chat.send_message(question.content)
126 text = self._enforce_stop_words(response.text, stop)
127 return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])
~\Anaconda3\lib\site-packages\vertexai\language_models\_language_models.py in send_message(self, message, max_output_tokens, temperature, top_k, top_p)
676 ]
677
--> 678 prediction_response = self._model._endpoint.predict(
679 instances=[prediction_instance],
680 parameters=prediction_parameters,
~\Anaconda3\lib\site-packages\google\cloud\aiplatform\models.py in predict(self, instances, parameters, timeout, use_raw_predict)
1544 )
1545 else:
-> 1546 prediction_response = self._prediction_client.predict(
1547 endpoint=self._gca_resource.name,
1548 instances=instances,
~\Anaconda3\lib\site-packages\google\cloud\aiplatform_v1\services\prediction_service\client.py in predict(self, request, endpoint, instances, parameters, retry, timeout, metadata)
600
601 # Send the request.
--> 602 response = rpc(
603 request,
604 retry=retry,
~\Anaconda3\lib\site-packages\google\api_core\gapic_v1\method.py in __call__(self, timeout, retry, *args, **kwargs)
111 kwargs["metadata"] = metadata
112
--> 113 return wrapped_func(*args, **kwargs)
114
115
~\Anaconda3\lib\site-packages\google\api_core\grpc_helpers.py in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InternalServerError: 500 Internal error encountered.
``` | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5258/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5257 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5257/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5257/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5257/events | https://github.com/langchain-ai/langchain/issues/5257 | 1,726,136,467 | I_kwDOIPDwls5m4sCT | 5,257 | Github integration | {
"login": "mudler",
"id": 2420543,
"node_id": "MDQ6VXNlcjI0MjA1NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2420543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mudler",
"html_url": "https://github.com/mudler",
"followers_url": "https://api.github.com/users/mudler/followers",
"following_url": "https://api.github.com/users/mudler/following{/other_user}",
"gists_url": "https://api.github.com/users/mudler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mudler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mudler/subscriptions",
"organizations_url": "https://api.github.com/users/mudler/orgs",
"repos_url": "https://api.github.com/users/mudler/repos",
"events_url": "https://api.github.com/users/mudler/events{/privacy}",
"received_events_url": "https://api.github.com/users/mudler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4678528817,
"node_id": "LA_kwDOIPDwls8AAAABFtyvMQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 11 | 2023-05-25T16:27:21 | 2023-11-29T21:21:01 | 2023-05-30T03:11:23 | CONTRIBUTOR | null | ### Feature request
It would be amazing to scan and fetch all the content from the GitHub API, such as PRs, issues and discussions.
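A rough sketch of what such a loader could look like (this is not an existing LangChain loader; the endpoint and field names follow the public GitHub REST API, and the helper name is made up):

```python
from typing import List

import requests
from langchain.docstore.document import Document

def load_github_issues(owner: str, repo: str, token: str) -> List[Document]:
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    resp = requests.get(
        url,
        headers={"Authorization": f"token {token}"},
        params={"state": "all"},
    )
    return [
        Document(
            page_content=f"{issue['title']}\n\n{issue.get('body') or ''}",
            metadata={"url": issue["html_url"], "number": issue["number"]},
        )
        for issue in resp.json()
        if "pull_request" not in issue  # this endpoint also returns PRs
    ]
```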
### Motivation
This would allow us to ask questions about the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a Python developer here; it would take me a while to figure out all the changes required. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5257/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/langchain-ai/langchain/issues/5257/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5256 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5256/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5256/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5256/events | https://github.com/langchain-ai/langchain/issues/5256 | 1,726,133,919 | I_kwDOIPDwls5m4raf | 5,256 | Add integration for LocalAI | {
"login": "mudler",
"id": 2420543,
"node_id": "MDQ6VXNlcjI0MjA1NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2420543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mudler",
"html_url": "https://github.com/mudler",
"followers_url": "https://api.github.com/users/mudler/followers",
"following_url": "https://api.github.com/users/mudler/following{/other_user}",
"gists_url": "https://api.github.com/users/mudler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mudler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mudler/subscriptions",
"organizations_url": "https://api.github.com/users/mudler/orgs",
"repos_url": "https://api.github.com/users/mudler/repos",
"events_url": "https://api.github.com/users/mudler/events{/privacy}",
"received_events_url": "https://api.github.com/users/mudler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4678528817,
"node_id": "LA_kwDOIPDwls8AAAABFtyvMQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 4899415699,
"node_id": "LA_kwDOIPDwls8AAAABJAcmkw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/llms",
"name": "llms",
"color": "7CDBB2",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 7 | 2023-05-25T16:25:18 | 2024-01-05T16:15:00 | null | CONTRIBUTOR | null | ### Feature request
Integration with LocalAI and its extended endpoints for downloading models from the gallery.
### Motivation
LocalAI is a self-hosted OpenAI drop-in replacement with support for multiple model families: https://github.com/go-skynet/LocalAI
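Because LocalAI exposes an OpenAI-compatible API, a stop-gap (hedged sketch; the URL, key and model name are placeholders) is to point the existing OpenAI wrapper at a LocalAI server by overriding the API base:

```python
from langchain.llms import OpenAI

llm = OpenAI(
    openai_api_base="http://localhost:8080/v1",  # local LocalAI server
    openai_api_key="not-needed",                 # placeholder; adjust if your server enforces keys
    model_name="ggml-gpt4all-j",                 # whatever model LocalAI is serving
)
print(llm("Say hello from LocalAI"))
```

A dedicated integration could additionally wrap LocalAI's model-gallery endpoints, which the plain OpenAI wrapper knows nothing about.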
### Your contribution
Not a Python guru, so it might take me a few cycles here. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5256/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5256/timeline | null | reopened | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5255 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5255/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5255/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5255/events | https://github.com/langchain-ai/langchain/issues/5255 | 1,726,121,249 | I_kwDOIPDwls5m4oUh | 5,255 | Inconsistent documentation for langchain.chains.FlareChain | {
"login": "james-310110",
"id": 36966103,
"node_id": "MDQ6VXNlcjM2OTY2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/36966103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/james-310110",
"html_url": "https://github.com/james-310110",
"followers_url": "https://api.github.com/users/james-310110/followers",
"following_url": "https://api.github.com/users/james-310110/following{/other_user}",
"gists_url": "https://api.github.com/users/james-310110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/james-310110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/james-310110/subscriptions",
"organizations_url": "https://api.github.com/users/james-310110/orgs",
"repos_url": "https://api.github.com/users/james-310110/repos",
"events_url": "https://api.github.com/users/james-310110/events{/privacy}",
"received_events_url": "https://api.github.com/users/james-310110/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4678528810,
"node_id": "LA_kwDOIPDwls8AAAABFtyvKg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 2 | 2023-05-25T16:15:17 | 2023-09-10T16:12:14 | 2023-09-10T16:12:13 | NONE | null | ### Question
Will there be future updates where we are allowed to customize answer_gen_llm when using FlareChain?
### Context
In the [documentation](https://python.langchain.com/en/latest/modules/chains/examples/flare.html) it says that:
In order to set up this chain, we will need three things:
- An LLM to generate the answer
- An LLM to generate hypothetical questions to use in retrieval
- A retriever to use to look up answers for
However, the example code only allows specification for the question_gen_llm, not the answer_gen_llm.
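An untested workaround sketch, assuming the `response_chain.llm` attribute from the source linked in the next paragraph really holds the hard-coded answer LLM (model and text below are illustrative):

```python
from langchain.chains import FlareChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

retriever = FAISS.from_texts(["FLARE demo text"], OpenAIEmbeddings()).as_retriever()
flare = FlareChain.from_llm(
    ChatOpenAI(temperature=0), retriever=retriever, max_generation_len=164, min_prob=0.3
)
# Swap out the hard-coded answer-generation LLM; it still has to be a
# completion model that can return logprobs.
flare.response_chain.llm = OpenAI(
    model_name="text-davinci-002", max_tokens=32, model_kwargs={"logprobs": 1}, temperature=0
)
```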
After referencing the [code](https://github.com/hwchase17/langchain/blob/9c0cb90997db9eb2e2a736df458d39fd7bec8ffb/langchain/chains/flare/base.py) for FlareChain, it seems that the answer_gen_llm is initialized as `OpenAI(max_tokens=32, model_kwargs={"logprobs": 1}, temperature=0)`, which defaults to `"text-davinci-003"` since no model_name is specified. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5255/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5254 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5254/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5254/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5254/events | https://github.com/langchain-ai/langchain/issues/5254 | 1,726,041,996 | I_kwDOIPDwls5m4U-M | 5,254 | Issue: <Streaming mode not work for Sequential Chains> | {
"login": "SaaS1973",
"id": 122426952,
"node_id": "U_kgDOB0wWSA",
"avatar_url": "https://avatars.githubusercontent.com/u/122426952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaaS1973",
"html_url": "https://github.com/SaaS1973",
"followers_url": "https://api.github.com/users/SaaS1973/followers",
"following_url": "https://api.github.com/users/SaaS1973/following{/other_user}",
"gists_url": "https://api.github.com/users/SaaS1973/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaaS1973/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaaS1973/subscriptions",
"organizations_url": "https://api.github.com/users/SaaS1973/orgs",
"repos_url": "https://api.github.com/users/SaaS1973/repos",
"events_url": "https://api.github.com/users/SaaS1973/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaaS1973/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-25T15:25:01 | 2023-09-10T16:12:18 | 2023-09-10T16:12:17 | NONE | null | ### Issue you'd like to raise.
Hi team,
I created `chain_A` and `chain_B` with `streaming=True` set on both of them, and then combined them as follows:
```python
overall_chain = SequentialChain(
    chains=[chain_A, chain_B],
    input_variables=["era", "title"],
    output_variables=["synopsis", "review"],
    verbose=True)
```
However, the streaming does not work.
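For reference, a minimal sketch of the usual streaming setup (not a confirmed fix for `SequentialChain`): the tokens are emitted by a callback handler attached to the LLM, not by the chain itself.

```python
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],  # prints tokens as they arrive
    temperature=0.7,
)
# chain_A and chain_B would then be built on top of this llm.
```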
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5254/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5252 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5252/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5252/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5252/events | https://github.com/langchain-ai/langchain/issues/5252 | 1,725,950,539 | I_kwDOIPDwls5m3-pL | 5,252 | how to monitoring the new files after directory loader class used | {
"login": "jamsnrihk",
"id": 134043330,
"node_id": "U_kgDOB_1Wwg",
"avatar_url": "https://avatars.githubusercontent.com/u/134043330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamsnrihk",
"html_url": "https://github.com/jamsnrihk",
"followers_url": "https://api.github.com/users/jamsnrihk/followers",
"following_url": "https://api.github.com/users/jamsnrihk/following{/other_user}",
"gists_url": "https://api.github.com/users/jamsnrihk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamsnrihk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamsnrihk/subscriptions",
"organizations_url": "https://api.github.com/users/jamsnrihk/orgs",
"repos_url": "https://api.github.com/users/jamsnrihk/repos",
"events_url": "https://api.github.com/users/jamsnrihk/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamsnrihk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-25T14:33:02 | 2023-09-14T16:09:01 | 2023-09-14T16:09:00 | NONE | null | ### Feature request
I am using LangChain + the OpenAI API to create a chatbot for private data. I can use LangChain's directory loader class to load files from a directory, but if new files are added to that directory, how can I load them automatically?
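One possible approach (a sketch, not a built-in LangChain feature) is to watch the directory with the `watchdog` package and load each new file as it appears; the path and loader below are placeholders:

```python
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
from langchain.document_loaders import UnstructuredFileLoader

class NewFileHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            docs = UnstructuredFileLoader(event.src_path).load()
            # add `docs` to your index / vector store here

observer = Observer()
observer.schedule(NewFileHandler(), path="./docs", recursive=True)
observer.start()
```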
### Motivation
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html
### Your contribution
If this problem can be solved, it will be good for companies to use it for internal knowledge-base sharing. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5252/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5250 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5250/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5250/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5250/events | https://github.com/langchain-ai/langchain/pull/5250 | 1,725,907,294 | PR_kwDOIPDwls5RWX8K | 5,250 | Docs link custom agent page in getting started | {
"login": "JanilsWoerst",
"id": 109829076,
"node_id": "U_kgDOBovb1A",
"avatar_url": "https://avatars.githubusercontent.com/u/109829076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JanilsWoerst",
"html_url": "https://github.com/JanilsWoerst",
"followers_url": "https://api.github.com/users/JanilsWoerst/followers",
"following_url": "https://api.github.com/users/JanilsWoerst/following{/other_user}",
"gists_url": "https://api.github.com/users/JanilsWoerst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JanilsWoerst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JanilsWoerst/subscriptions",
"organizations_url": "https://api.github.com/users/JanilsWoerst/orgs",
"repos_url": "https://api.github.com/users/JanilsWoerst/repos",
"events_url": "https://api.github.com/users/JanilsWoerst/events{/privacy}",
"received_events_url": "https://api.github.com/users/JanilsWoerst/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 1 | 2023-05-25T14:08:43 | 2023-05-25T20:11:31 | 2023-05-25T20:11:30 | CONTRIBUTOR | null | # Docs: link custom agent page in getting started
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5250/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5250",
"html_url": "https://github.com/langchain-ai/langchain/pull/5250",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5250.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5250.patch",
"merged_at": "2023-05-25T20:11:30"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5249 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5249/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5249/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5249/events | https://github.com/langchain-ai/langchain/issues/5249 | 1,725,901,643 | I_kwDOIPDwls5m3ytL | 5,249 | Issue: Add topics to the GitHub repos | {
"login": "Decryptu",
"id": 90518536,
"node_id": "MDQ6VXNlcjkwNTE4NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/90518536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Decryptu",
"html_url": "https://github.com/Decryptu",
"followers_url": "https://api.github.com/users/Decryptu/followers",
"following_url": "https://api.github.com/users/Decryptu/following{/other_user}",
"gists_url": "https://api.github.com/users/Decryptu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Decryptu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Decryptu/subscriptions",
"organizations_url": "https://api.github.com/users/Decryptu/orgs",
"repos_url": "https://api.github.com/users/Decryptu/repos",
"events_url": "https://api.github.com/users/Decryptu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Decryptu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-05-25T14:05:59 | 2023-12-09T16:06:41 | 2023-12-09T16:06:41 | NONE | null | ### Issue you'd like to raise.
Hello, would it be possible to add topics (tags) to the repositories? It would make them easier to find and organize afterwards, and it's also useful for external tools that fetch the GitHub API to track repos! Here is an example from HuggingFace:
.
<img width="538" alt="Capture d’écran 2023-05-25 à 15 58 11" src="https://github.com/hwchase17/langchain/assets/90518536/8a0029ad-6c44-426b-bc9d-2b01fcad46a7">
.
And here is a more specific screenshot in case I'm using the wrong words (sorry, not a native English speaker):
.
<img width="1440" alt="Capture d’écran 2023-05-25 à 16 03 40" src="https://github.com/hwchase17/langchain/assets/90518536/5aa4574d-1ae4-4bca-8ad5-044f3ce4a3cf">
### Suggestion:
I think you already know how: click the button on the repo page, then under About > Topics add tags like "python", "ai", "artificial intelligence", etc. Thank you! 😃 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5249/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5249/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5248 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5248/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5248/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5248/events | https://github.com/langchain-ai/langchain/pull/5248 | 1,725,879,506 | PR_kwDOIPDwls5RWRzw | 5,248 | bump 180 | {
"login": "dev2049",
"id": 130488702,
"node_id": "U_kgDOB8cZfg",
"avatar_url": "https://avatars.githubusercontent.com/u/130488702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dev2049",
"html_url": "https://github.com/dev2049",
"followers_url": "https://api.github.com/users/dev2049/followers",
"following_url": "https://api.github.com/users/dev2049/following{/other_user}",
"gists_url": "https://api.github.com/users/dev2049/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dev2049/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dev2049/subscriptions",
"organizations_url": "https://api.github.com/users/dev2049/orgs",
"repos_url": "https://api.github.com/users/dev2049/repos",
"events_url": "https://api.github.com/users/dev2049/events{/privacy}",
"received_events_url": "https://api.github.com/users/dev2049/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5010622926,
"node_id": "LA_kwDOIPDwls8AAAABKqgJzg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/release",
"name": "release",
"color": "07D4BE",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 1 | 2023-05-25T13:54:23 | 2023-05-25T14:09:51 | 2023-05-25T14:09:50 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5248/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5248",
"html_url": "https://github.com/langchain-ai/langchain/pull/5248",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5248.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5248.patch",
"merged_at": "2023-05-25T14:09:50"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5247 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5247/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5247/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5247/events | https://github.com/langchain-ai/langchain/pull/5247 | 1,725,870,922 | PR_kwDOIPDwls5RWP9p | 5,247 | Weaviate: Add QnA with sources example | {
"login": "hsm207",
"id": 2398765,
"node_id": "MDQ6VXNlcjIzOTg3NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2398765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsm207",
"html_url": "https://github.com/hsm207",
"followers_url": "https://api.github.com/users/hsm207/followers",
"following_url": "https://api.github.com/users/hsm207/following{/other_user}",
"gists_url": "https://api.github.com/users/hsm207/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hsm207/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsm207/subscriptions",
"organizations_url": "https://api.github.com/users/hsm207/orgs",
"repos_url": "https://api.github.com/users/hsm207/repos",
"events_url": "https://api.github.com/users/hsm207/events{/privacy}",
"received_events_url": "https://api.github.com/users/hsm207/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-25T13:49:21 | 2023-05-25T13:59:12 | 2023-05-25T13:58:34 | CONTRIBUTOR | null | # Add QnA with sources example
<!--
Thank you for contributing to LangChain! Your PR will appear in our next release under the title you set. Please make sure it highlights your valuable contribution.
Replace this with a description of the change, the issue it fixes (if applicable), and relevant context. List any dependencies required for this change.
After you're done, someone will review your PR. They may suggest improvements. If no one reviews your PR within a few days, feel free to @-mention the same people again, as notifications can get lost.
-->
<!-- Remove if not applicable -->
Fixes: see https://stackoverflow.com/questions/76207160/langchain-doesnt-work-with-weaviate-vector-database-getting-valueerror/76210017#76210017
## Before submitting
<!-- If you're adding a new integration, include an integration test and an example notebook showing its use! -->
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
<!-- For a quicker response, figure out the right person to tag with @
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoaders
- @eyurtsev
Models
- @hwchase17
- @agola11
Agents / Tools / Toolkits
- @vowelparrot
VectorStores / Retrievers / Memory
- @dev2049
-->
@dev2049 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5247/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5247",
"html_url": "https://github.com/langchain-ai/langchain/pull/5247",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5247.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5247.patch",
"merged_at": "2023-05-25T13:58:34"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5246 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5246/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5246/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5246/events | https://github.com/langchain-ai/langchain/pull/5246 | 1,725,841,983 | PR_kwDOIPDwls5RWJyu | 5,246 | Added the option of specifying a proxy for the OpenAI API | {
"login": "ymaurer",
"id": 14054229,
"node_id": "MDQ6VXNlcjE0MDU0MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/14054229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ymaurer",
"html_url": "https://github.com/ymaurer",
"followers_url": "https://api.github.com/users/ymaurer/followers",
"following_url": "https://api.github.com/users/ymaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/ymaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ymaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ymaurer/subscriptions",
"organizations_url": "https://api.github.com/users/ymaurer/orgs",
"repos_url": "https://api.github.com/users/ymaurer/repos",
"events_url": "https://api.github.com/users/ymaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/ymaurer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 0 | 2023-05-25T13:33:17 | 2023-05-25T16:50:26 | 2023-05-25T16:50:26 | CONTRIBUTOR | null | # Added the option of specifying a proxy for the OpenAI API
Fixes #5243
It affects the OpenAI models.
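A hedged usage sketch, assuming the new option is exposed as an `openai_proxy` argument (the proxy URL is a placeholder):

```python
from langchain.llms import OpenAI

llm = OpenAI(
    temperature=0,
    openai_proxy="http://corporate-proxy.example.com:8080",
)
```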
- @hwchase17
- @agola11 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5246/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5246/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5246",
"html_url": "https://github.com/langchain-ai/langchain/pull/5246",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5246.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5246.patch",
"merged_at": "2023-05-25T16:50:26"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5245 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5245/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5245/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5245/events | https://github.com/langchain-ai/langchain/issues/5245 | 1,725,833,687 | I_kwDOIPDwls5m3iHX | 5,245 | Issue: import 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint failing | {
"login": "shridhars",
"id": 3861617,
"node_id": "MDQ6VXNlcjM4NjE2MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3861617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shridhars",
"html_url": "https://github.com/shridhars",
"followers_url": "https://api.github.com/users/shridhars/followers",
"following_url": "https://api.github.com/users/shridhars/following{/other_user}",
"gists_url": "https://api.github.com/users/shridhars/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shridhars/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shridhars/subscriptions",
"organizations_url": "https://api.github.com/users/shridhars/orgs",
"repos_url": "https://api.github.com/users/shridhars/repos",
"events_url": "https://api.github.com/users/shridhars/events{/privacy}",
"received_events_url": "https://api.github.com/users/shridhars/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-25T13:28:29 | 2023-09-14T16:09:07 | 2023-09-14T16:09:06 | NONE | null | ### Issue you'd like to raise.
`!pip3 install langchain==0.0.179 boto3`
After installing langchain with the above command and trying to run the example at
https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html
I get the error below.
`ImportError: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint' (/opt/conda/lib/python3.10/site-packages/langchain/llms/sagemaker_endpoint.py)`
Am I missing something?
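A hedged check worth running (not a confirmed fix): older langchain releases expose the SageMaker content handler under a different name, so it is worth confirming which version is actually being imported at runtime.

```python
import langchain
print(langchain.__version__)  # confirm the runtime really is 0.0.179

try:
    from langchain.llms.sagemaker_endpoint import LLMContentHandler
except ImportError:
    # older releases used ContentHandlerBase instead
    from langchain.llms.sagemaker_endpoint import ContentHandlerBase as LLMContentHandler
```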
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5245/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5244 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5244/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5244/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5244/events | https://github.com/langchain-ai/langchain/pull/5244 | 1,725,828,732 | PR_kwDOIPDwls5RWG5- | 5,244 | Fix typo in docstring of RetryWithErrorOutputParser | {
"login": "mwinterde",
"id": 60154509,
"node_id": "MDQ6VXNlcjYwMTU0NTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/60154509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mwinterde",
"html_url": "https://github.com/mwinterde",
"followers_url": "https://api.github.com/users/mwinterde/followers",
"following_url": "https://api.github.com/users/mwinterde/following{/other_user}",
"gists_url": "https://api.github.com/users/mwinterde/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mwinterde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mwinterde/subscriptions",
"organizations_url": "https://api.github.com/users/mwinterde/orgs",
"repos_url": "https://api.github.com/users/mwinterde/repos",
"events_url": "https://api.github.com/users/mwinterde/events{/privacy}",
"received_events_url": "https://api.github.com/users/mwinterde/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-25T13:25:30 | 2023-05-25T13:59:31 | 2023-05-25T13:59:31 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5244/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5244",
"html_url": "https://github.com/langchain-ai/langchain/pull/5244",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5244.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5244.patch",
"merged_at": "2023-05-25T13:59:31"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5243 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5243/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5243/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5243/events | https://github.com/langchain-ai/langchain/issues/5243 | 1,725,784,636 | I_kwDOIPDwls5m3WI8 | 5,243 | Add possibility to set a proxy for openai API access | {
"login": "ymaurer",
"id": 14054229,
"node_id": "MDQ6VXNlcjE0MDU0MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/14054229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ymaurer",
"html_url": "https://github.com/ymaurer",
"followers_url": "https://api.github.com/users/ymaurer/followers",
"following_url": "https://api.github.com/users/ymaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/ymaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ymaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ymaurer/subscriptions",
"organizations_url": "https://api.github.com/users/ymaurer/orgs",
"repos_url": "https://api.github.com/users/ymaurer/repos",
"events_url": "https://api.github.com/users/ymaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/ymaurer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-25T13:00:09 | 2023-05-25T16:50:27 | 2023-05-25T16:50:27 | CONTRIBUTOR | null | ### Feature request
For a deployment behind a corporate proxy, it's useful to be able to access the API by specifying an explicit proxy.
### Motivation
Currently it's possible to do this by setting the http_proxy / https_proxy environment variables, which proxies the whole Python interpreter. However, this then breaks access to other internal servers: requests to other network resources (e.g. a vector database on a different server, corporate S3 storage, etc.) should not go through the proxy. So it's important to be able to proxy only the requests to externally hosted APIs. We are working with the OpenAI API and currently we cannot access both it and our Qdrant database on another server.
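For context, this is the mechanism the `openai` Python package itself offers (pre-1.0 API), which a LangChain-level option could simply pass through; the proxy URL is a placeholder:

```python
import openai

openai.proxy = {
    "http": "http://corporate-proxy.example.com:8080",
    "https": "http://corporate-proxy.example.com:8080",
}
```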
### Your contribution
Since the openai python package supports the proxy parameter, this is relatively easy to implement for the OpenAI API. I'll submit a PR. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5243/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5242 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5242/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5242/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5242/events | https://github.com/langchain-ai/langchain/issues/5242 | 1,725,751,778 | I_kwDOIPDwls5m3OHi | 5,242 | 'numpy._DTypeMeta' object is not subscriptable | {
"login": "prashantkanuru",
"id": 79887087,
"node_id": "MDQ6VXNlcjc5ODg3MDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/79887087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prashantkanuru",
"html_url": "https://github.com/prashantkanuru",
"followers_url": "https://api.github.com/users/prashantkanuru/followers",
"following_url": "https://api.github.com/users/prashantkanuru/following{/other_user}",
"gists_url": "https://api.github.com/users/prashantkanuru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prashantkanuru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prashantkanuru/subscriptions",
"organizations_url": "https://api.github.com/users/prashantkanuru/orgs",
"repos_url": "https://api.github.com/users/prashantkanuru/repos",
"events_url": "https://api.github.com/users/prashantkanuru/events{/privacy}",
"received_events_url": "https://api.github.com/users/prashantkanuru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-25T12:43:16 | 2023-09-12T16:13:19 | 2023-09-12T16:13:18 | NONE | null | ### System Info
I am using langchain version 0.0.176 and hitting the error `'numpy._DTypeMeta' object is not subscriptable` whenever I carry out any operation with the Chroma DB vector store.
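As a first hedged check, it is worth confirming which numpy and chromadb versions are actually installed, since this error typically comes from combining an older numpy with a newer chromadb build:

```python
import numpy
import chromadb

print(numpy.__version__, chromadb.__version__)
```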
### Who can help?
@hwchase17 - please help me out with this error. Do I need to upgrade the version of LangChain to overcome this problem?
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code snippets producing this behavior
1. docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever()
2. docsearch = Chroma.from_texts(texts, embeddings)
   query = "...."
   docs = docsearch.similarity_search(query)
3. db1 = Chroma.from_documents(docs_1, embeddings)
### Expected behavior
Should be able to use ChromaDb as a retriever without hitting any error. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5242/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5241 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5241/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5241/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5241/events | https://github.com/langchain-ai/langchain/issues/5241 | 1,725,733,180 | I_kwDOIPDwls5m3Jk8 | 5,241 | Issue: How to make a request into an agent/tool | {
"login": "cosmin-z",
"id": 61350549,
"node_id": "MDQ6VXNlcjYxMzUwNTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/61350549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cosmin-z",
"html_url": "https://github.com/cosmin-z",
"followers_url": "https://api.github.com/users/cosmin-z/followers",
"following_url": "https://api.github.com/users/cosmin-z/following{/other_user}",
"gists_url": "https://api.github.com/users/cosmin-z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cosmin-z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cosmin-z/subscriptions",
"organizations_url": "https://api.github.com/users/cosmin-z/orgs",
"repos_url": "https://api.github.com/users/cosmin-z/repos",
"events_url": "https://api.github.com/users/cosmin-z/events{/privacy}",
"received_events_url": "https://api.github.com/users/cosmin-z/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-25T12:31:15 | 2023-09-10T16:12:38 | 2023-09-10T16:12:37 | NONE | null | ### Issue you'd like to raise.
Hi everybody, does anyone know if there is a way to make a POST request using a custom agent/tool? The idea is that when the user needs a specific thing, the agent intercepts it and the custom tool makes the request. I can't find anything useful in the documentation, and when I try it, it doesn't work.
In my case I have:
```python
class FlowTool(BaseTool):
    name = "Call To Max"
    description = "use the run function when the user ask to make a call to Max. You don't need any parameter"

    def _run(self):
        url = "https://ex.mex.com/web"
        data = {
            "prova": 'ciao'
        }
        response = requests.post(url, json=data, verify=False)
        return 'done'

    def _arun(self, radius: int):
        raise NotImplementedError("This tool does not support async")
```
```python
tools = [FlowTool()]
agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
)
```
`agent("Can you make a call to mex?")`
Thank you for helping me
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5241/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5240 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5240/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5240/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5240/events | https://github.com/langchain-ai/langchain/pull/5240 | 1,725,728,622 | PR_kwDOIPDwls5RVw8Q | 5,240 | Resolve error in StructuredOutputParser docs | {
"login": "mwinterde",
"id": 60154509,
"node_id": "MDQ6VXNlcjYwMTU0NTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/60154509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mwinterde",
"html_url": "https://github.com/mwinterde",
"followers_url": "https://api.github.com/users/mwinterde/followers",
"following_url": "https://api.github.com/users/mwinterde/following{/other_user}",
"gists_url": "https://api.github.com/users/mwinterde/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mwinterde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mwinterde/subscriptions",
"organizations_url": "https://api.github.com/users/mwinterde/orgs",
"repos_url": "https://api.github.com/users/mwinterde/repos",
"events_url": "https://api.github.com/users/mwinterde/events{/privacy}",
"received_events_url": "https://api.github.com/users/mwinterde/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-25T12:28:10 | 2023-05-25T14:47:26 | 2023-05-25T14:47:26 | CONTRIBUTOR | null | # Resolve error in StructuredOutputParser docs
Documentation for `StructuredOutputParser` is currently not reproducible; that is, `output_parser.parse(output)` raises an error because the LLM returns a response with an invalid format
```python
_input = prompt.format_prompt(question="what's the capital of france")
output = model(_input.to_string())
output
# ?
#
# ```json
# {
# "answer": "Paris",
# "source": "https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"
# }
# ```
```
This was fixed by adding a question mark to the prompt. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5240/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/5240",
"html_url": "https://github.com/langchain-ai/langchain/pull/5240",
"diff_url": "https://github.com/langchain-ai/langchain/pull/5240.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/5240.patch",
"merged_at": "2023-05-25T14:47:26"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/5239 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5239/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5239/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5239/events | https://github.com/langchain-ai/langchain/issues/5239 | 1,725,718,432 | I_kwDOIPDwls5m3F-g | 5,239 | Issue: ElasticsearchEmbeddings does not work on hosted elasticsearch (Platinum) | {
"login": "lemig",
"id": 78766,
"node_id": "MDQ6VXNlcjc4NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/78766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lemig",
"html_url": "https://github.com/lemig",
"followers_url": "https://api.github.com/users/lemig/followers",
"following_url": "https://api.github.com/users/lemig/following{/other_user}",
"gists_url": "https://api.github.com/users/lemig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lemig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lemig/subscriptions",
"organizations_url": "https://api.github.com/users/lemig/orgs",
"repos_url": "https://api.github.com/users/lemig/repos",
"events_url": "https://api.github.com/users/lemig/events{/privacy}",
"received_events_url": "https://api.github.com/users/lemig/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jeffvestal",
"id": 53237856,
"node_id": "MDQ6VXNlcjUzMjM3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/53237856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffvestal",
"html_url": "https://github.com/jeffvestal",
"followers_url": "https://api.github.com/users/jeffvestal/followers",
"following_url": "https://api.github.com/users/jeffvestal/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffvestal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffvestal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffvestal/subscriptions",
"organizations_url": "https://api.github.com/users/jeffvestal/orgs",
"repos_url": "https://api.github.com/users/jeffvestal/repos",
"events_url": "https://api.github.com/users/jeffvestal/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffvestal/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jeffvestal",
"id": 53237856,
"node_id": "MDQ6VXNlcjUzMjM3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/53237856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffvestal",
"html_url": "https://github.com/jeffvestal",
"followers_url": "https://api.github.com/users/jeffvestal/followers",
"following_url": "https://api.github.com/users/jeffvestal/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffvestal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffvestal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffvestal/subscriptions",
"organizations_url": "https://api.github.com/users/jeffvestal/orgs",
"repos_url": "https://api.github.com/users/jeffvestal/repos",
"events_url": "https://api.github.com/users/jeffvestal/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffvestal/received_events",
"type": "User",
"site_admin": false
}
] | null | 5 | 2023-05-25T12:21:19 | 2023-05-31T07:40:33 | 2023-05-31T07:40:10 | NONE | null | ### Issue you'd like to raise.
LangChain 0.0.179, hosted elasticsearch (Platinum edition)
V0.0.179 introduced elasticsearch embeddings, great!
But it is only implemented for elastic cloud.
I want to be able to do embeddings on my own elastic cluster.
@jeffvestal @derickson
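For reference, a rough sketch of the kind of API being asked for, i.e. a constructor that accepts an existing `Elasticsearch` client instead of a cloud_id. The `from_es_connection` name, host, credentials and model id below are all hypothetical placeholders:
```python
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings

# Self-hosted (non-Cloud) cluster; host and credentials are placeholders.
es_connection = Elasticsearch(
    "https://my-es-node.internal:9200",
    basic_auth=("elastic", "changeme"),
    verify_certs=False,
)

# Hypothetical constructor taking an existing connection instead of a cloud_id.
embeddings = ElasticsearchEmbeddings.from_es_connection(
    model_id="sentence-transformers__all-minilm-l6-v2",  # a deployed text-embedding model id
    es_connection=es_connection,
)
```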
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5239/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5237 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5237/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5237/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5237/events | https://github.com/langchain-ai/langchain/issues/5237 | 1,725,616,952 | I_kwDOIPDwls5m2tM4 | 5,237 | Token limit reached trying to use plugin | {
"login": "daniellefranca96",
"id": 134293046,
"node_id": "U_kgDOCAEmNg",
"avatar_url": "https://avatars.githubusercontent.com/u/134293046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daniellefranca96",
"html_url": "https://github.com/daniellefranca96",
"followers_url": "https://api.github.com/users/daniellefranca96/followers",
"following_url": "https://api.github.com/users/daniellefranca96/following{/other_user}",
"gists_url": "https://api.github.com/users/daniellefranca96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daniellefranca96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniellefranca96/subscriptions",
"organizations_url": "https://api.github.com/users/daniellefranca96/orgs",
"repos_url": "https://api.github.com/users/daniellefranca96/repos",
"events_url": "https://api.github.com/users/daniellefranca96/events{/privacy}",
"received_events_url": "https://api.github.com/users/daniellefranca96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-25T11:17:24 | 2023-09-10T16:12:44 | 2023-09-10T16:12:43 | NONE | null | ### System Info
When I try to use a ChatGPT plugin with agents as shown in the documentation, some plugins, like MediumPluginGPT, reach the token limit during the task and give an error.
![image](https://github.com/hwchase17/langchain/assets/134293046/940e6342-6396-4910-be08-117567d5bfdc)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/1Pya_AaPucsgw__OJa0Xho1u8OI1xFqYB#scrollTo=Ri2RPTKrxF6b
### Expected behavior
Should return the ten most recent news about AI | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5237/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5236 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5236/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5236/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5236/events | https://github.com/langchain-ai/langchain/issues/5236 | 1,725,576,992 | I_kwDOIPDwls5m2jcg | 5,236 | Slots Filling in Langchain | {
"login": "Sadaf-Syeda",
"id": 59745163,
"node_id": "MDQ6VXNlcjU5NzQ1MTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/59745163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sadaf-Syeda",
"html_url": "https://github.com/Sadaf-Syeda",
"followers_url": "https://api.github.com/users/Sadaf-Syeda/followers",
"following_url": "https://api.github.com/users/Sadaf-Syeda/following{/other_user}",
"gists_url": "https://api.github.com/users/Sadaf-Syeda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sadaf-Syeda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sadaf-Syeda/subscriptions",
"organizations_url": "https://api.github.com/users/Sadaf-Syeda/orgs",
"repos_url": "https://api.github.com/users/Sadaf-Syeda/repos",
"events_url": "https://api.github.com/users/Sadaf-Syeda/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sadaf-Syeda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-25T10:50:38 | 2023-09-17T13:10:59 | 2023-09-15T16:10:58 | NONE | null | ### System Info
I need to use OpenAPI for calling an API, but that API needs some params in the request body, and those values need to be taken from the user.
I need to understand how to define the slot names that need to be filled by the user. Is there any way to do this?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Need code
### Expected behavior
Slots filling from user | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5236/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5235 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5235/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5235/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5235/events | https://github.com/langchain-ai/langchain/issues/5235 | 1,725,555,032 | I_kwDOIPDwls5m2eFY | 5,235 | Support for ttl in DynamoDBChatMessageHistory | {
"login": "Benyuel",
"id": 5361725,
"node_id": "MDQ6VXNlcjUzNjE3MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5361725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Benyuel",
"html_url": "https://github.com/Benyuel",
"followers_url": "https://api.github.com/users/Benyuel/followers",
"following_url": "https://api.github.com/users/Benyuel/following{/other_user}",
"gists_url": "https://api.github.com/users/Benyuel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Benyuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Benyuel/subscriptions",
"organizations_url": "https://api.github.com/users/Benyuel/orgs",
"repos_url": "https://api.github.com/users/Benyuel/repos",
"events_url": "https://api.github.com/users/Benyuel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Benyuel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-25T10:35:27 | 2023-11-24T14:35:31 | 2023-09-10T16:12:54 | NONE | null | ### Feature request
Allow a user to specify a record ttl for messages/sessions persisted to dynamodb in https://github.com/hwchase17/langchain/blob/5cfa72a130f675c8da5963a11d416f553f692e72/langchain/memory/chat_message_histories/dynamodb.py#L17-L20.
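For illustration, a rough sketch of what this implies at the boto3 level: an absolute expiry timestamp attribute on each item, with DynamoDB TTL enabled on the table for that attribute. The table name, key schema and attribute name below are assumptions:
```python
import time

import boto3

ttl_seconds = 60 * 60 * 24 * 30  # e.g. keep sessions for 30 days

table = boto3.resource("dynamodb").Table("SessionTable")
table.put_item(
    Item={
        "SessionId": "my-session-id",
        "History": [],  # serialized messages would go here
        # DynamoDB TTL must be enabled on the table for this attribute name.
        "expires_at": int(time.time()) + ttl_seconds,
    }
)
```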
### Motivation
This will allow automated purging of chat history after a specified time period.
### Your contribution
Maybe, depends on my available time. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5235/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5235/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5234 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5234/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5234/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5234/events | https://github.com/langchain-ai/langchain/issues/5234 | 1,725,509,252 | I_kwDOIPDwls5m2S6E | 5,234 | Make Redis Vector database operations Asynchronous | {
"login": "Octforce",
"id": 131253681,
"node_id": "U_kgDOB9LFsQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131253681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Octforce",
"html_url": "https://github.com/Octforce",
"followers_url": "https://api.github.com/users/Octforce/followers",
"following_url": "https://api.github.com/users/Octforce/following{/other_user}",
"gists_url": "https://api.github.com/users/Octforce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Octforce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Octforce/subscriptions",
"organizations_url": "https://api.github.com/users/Octforce/orgs",
"repos_url": "https://api.github.com/users/Octforce/repos",
"events_url": "https://api.github.com/users/Octforce/events{/privacy}",
"received_events_url": "https://api.github.com/users/Octforce/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-25T10:04:53 | 2023-09-25T16:07:01 | 2023-09-25T16:07:01 | NONE | null | ### Feature request
There are other vector databases that support async usage in LangChain; adding Redis to that list would help programmers who use asynchronous programming in Python. I believe that with a package like aioredis this should be easily achievable.
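Until native async support exists, a minimal interim sketch is to push the blocking calls onto a worker thread; the connection details below are placeholders:
```python
import asyncio

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

# Placeholder connection details for an existing RediSearch index.
store = Redis.from_existing_index(
    OpenAIEmbeddings(),
    index_name="docs",
    redis_url="redis://localhost:6379",
)

async def asimilarity_search(query: str, k: int = 4):
    # Keep the event loop free by running the blocking client call in a thread.
    return await asyncio.to_thread(store.similarity_search, query, k=k)

# usage: docs = asyncio.run(asimilarity_search("what is a vector store?"))
```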
### Motivation
The motivation is to support Python async programmers with this feature and also to boost performance when querying the vector store and inserting data into it.
### Your contribution
I can contribute by opening a PR or by testing the code once it is done. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5234/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5234/timeline | null | not_planned | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/5233 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5233/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/5233/comments | https://api.github.com/repos/langchain-ai/langchain/issues/5233/events | https://github.com/langchain-ai/langchain/issues/5233 | 1,725,368,096 | I_kwDOIPDwls5m1wcg | 5,233 | ChatVertexAI is not imported | {
"login": "marouane-tradelab",
"id": 10705961,
"node_id": "MDQ6VXNlcjEwNzA1OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10705961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marouane-tradelab",
"html_url": "https://github.com/marouane-tradelab",
"followers_url": "https://api.github.com/users/marouane-tradelab/followers",
"following_url": "https://api.github.com/users/marouane-tradelab/following{/other_user}",
"gists_url": "https://api.github.com/users/marouane-tradelab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marouane-tradelab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marouane-tradelab/subscriptions",
"organizations_url": "https://api.github.com/users/marouane-tradelab/orgs",
"repos_url": "https://api.github.com/users/marouane-tradelab/repos",
"events_url": "https://api.github.com/users/marouane-tradelab/events{/privacy}",
"received_events_url": "https://api.github.com/users/marouane-tradelab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-25T08:46:26 | 2023-06-02T11:55:03 | 2023-06-02T11:55:03 | NONE | null | ### System Info
Hello,
I'm trying to use ChatVertexAI and I noticed that the following import is not working:
```python
from langchain.chat_models import ChatVertexAI
```
But this one is working correctly:
```python
from langchain.chat_models.vertexai import ChatVertexAI
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install the main branch: `pip install git+https://github.com/hwchase17/langchain.git`
2. try to import `from langchain.chat_models import ChatVertexAI`
3. try to import `from langchain.chat_models.vertexai import ChatVertexAI`
### Expected behavior
The import `from langchain.chat_models import ChatVertexAI` should work | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/5233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/5233/timeline | null | completed | null | null |