
Enable streaming support for openai v1 #597

Merged
merged 3 commits into from Nov 11, 2023

Conversation

Alvaromah
Collaborator

Why are these changes needed?

The OpenAI API, like other LLM frameworks, offers streaming, which speeds up debugging by removing the need to wait for a complete response.

This is a simple mechanism to support streaming.
Tested on openai v1.1.1.

To enable streaming, just use this code:

import autogen

# Load your configurations, e.g. from an OAI_CONFIG_LIST file
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

llm_config = {
    "config_list": config_list,
    # Enable/disable streaming (defaults to False)
    "stream": True,
}

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

Related issue number

Related to #465, #217

Checks

@codecov-commenter

codecov-commenter commented Nov 8, 2023

Codecov Report

Merging #597 (ef9ab06) into main (2a96e4d) will increase coverage by 14.86%.
Report is 11 commits behind head on main.
The diff coverage is 100.00%.

@@             Coverage Diff             @@
##             main     #597       +/-   ##
===========================================
+ Coverage   34.04%   48.90%   +14.86%     
===========================================
  Files          25       26        +1     
  Lines        3005     3253      +248     
  Branches      668      774      +106     
===========================================
+ Hits         1023     1591      +568     
+ Misses       1906     1504      -402     
- Partials       76      158       +82     
Flag Coverage Δ
unittests 48.57% <100.00%> (+14.59%) ⬆️

Flags with carried forward coverage won't be shown.

Files Coverage Δ
autogen/oai/client.py 83.03% <100.00%> (+45.53%) ⬆️

... and 15 files with indirect coverage changes

Collaborator

@sonichi sonichi left a comment

Thanks for remaking the PR. I have one minor comment.

autogen/oai/client.py (comment resolved)
@sonichi sonichi requested review from ragyabraham and a team November 8, 2023 15:01
@gagb
Collaborator

gagb commented Nov 8, 2023

So this PR works for me! But I also noticed that it adds redundant prints to my console: first the streamed output is printed, and then the entire sender->recipient message. I am not sure I have a suggested fix.

@ragyabraham
Collaborator

@gagb yes, you're right. I think this is happening because after the message has been streamed, _process_received_message is called, which calls _print_received_message, which prints the message. @Alvaromah a possible fix here is to add a condition in _process_received_message so it only calls _print_received_message when stream=False. What do you think?

@Alvaromah
Collaborator Author

It could be a possible solution, but I see a couple of issues:

  1. How can we access 'stream' value within the '_process_received_message' function?
  2. The 'OpenAIWrapper' class may contain multiple configurations, and we would need to determine which one has been applied.

What do you think?

@Alvaromah
Collaborator Author

Another solution could be to add a 'was_streamed' property to the response, indicating whether the result should be printed or not.
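For illustration, here is a minimal sketch of the "was_streamed" idea; the create_response helper and its fields are hypothetical stand-ins, not autogen's or OpenAI's API:

```python
from types import SimpleNamespace

def create_response(text, streamed):
    # Hypothetical: the client would set this flag when it streamed the output
    response = SimpleNamespace(content=text)
    response.was_streamed = streamed
    return response

resp = create_response("hi", streamed=True)
if not getattr(resp, "was_streamed", False):
    print(resp.content)  # only print responses that were not already streamed
```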

@yiranwu0
Collaborator

yiranwu0 commented Nov 9, 2023

Can we check the type of openai's return?
What I did:

import types

# If the result is a generator, process it as a stream
if isinstance(result, types.GeneratorType):
    ...
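As a runnable sketch of that type check (note: openai v1's streaming return is a Stream object rather than a plain generator, so the real check may need adjusting; consume_result and fake_stream are hypothetical names):

```python
import types

def consume_result(result):
    """Handle both a streamed (generator) result and a plain one."""
    if isinstance(result, types.GeneratorType):
        # Streamed: accumulate the chunks as they arrive
        return "".join(chunk for chunk in result)
    # Non-streamed: the full text is already available
    return result

def fake_stream():
    yield "Hello, "
    yield "world!"
```

Here consume_result(fake_stream()) joins the chunks, while a plain string passes through unchanged.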

@yiranwu0 yiranwu0 self-assigned this Nov 9, 2023
@Alvaromah
Collaborator Author

I believe that the best solution would be to introduce a "was_streamed" property within the response object. This approach, however, would entail a refactoring of how we currently handle the response data.

Specifically, the method extract_text_or_function_call transforms the response object into a List[str], which inadvertently strips away valuable response metadata. This premature conversion is a concern because it leads to a loss of information that could be beneficial, not only for the present scenario but also for potential future use cases.

Nevertheless, I am aware that such a refactoring might be beyond the scope of this PR. My recommendation is to proceed with merging the current changes to integrate the initial version of streaming. This would allow us to leverage the streaming functionality sooner rather than later and evaluate the necessity of the proposed changes at a more appropriate time.
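To illustrate the metadata-loss concern with a toy response object (the field names here are stand-ins, not the exact OpenAI schema):

```python
from types import SimpleNamespace

# A toy response carrying both text and metadata
response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="hi"))],
    model="gpt-3.5-turbo",
    usage={"total_tokens": 5},
)

# Roughly what extract_text_or_function_call does: keep only the text
texts = [choice.message.content for choice in response.choices]
# Consumers that only see `texts` lose response.model and response.usage
```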

@sonichi
Collaborator

sonichi commented Nov 9, 2023

Makes sense. Could you add a test though to cover the changed code? Also, please add the comments to the code about function streaming.

@yiranwu0
Collaborator

yiranwu0 commented Nov 9, 2023

Oh, hey. I just revisited the question. There is a possible alternative:

  • We don't need to modify anything here; instead, allow the create function to return a generator type from oai, and do the streaming in the conversable agents. We only need to check the return type of client.create, or whether streaming is true.

With the current solution, if one only uses oai.create with streaming, there will always be prints, and then we would want to introduce a verbose parameter.

@Alvaromah
Collaborator Author

@kevin666aa I'm not sure if I understand your suggestion correctly.
Are you suggesting leaving the client.py class as it is and managing the streaming within the conversational agent?

If that's the case:

  • How is caching handled?
  • The conversational agent expects an array of text messages. Wouldn't the conversational agent's workflow need to be modified?

Have you managed to get this approach working?

Thanks

@ragyabraham
Collaborator

ragyabraham commented Nov 9, 2023

Hey @Alvaromah, that's a good point. Can we make stream a class variable so that it is accessible by _process_received_message?
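A minimal sketch of that idea (a toy Agent class standing in for autogen's ConversableAgent, with a list standing in for console output):

```python
class Agent:
    def __init__(self, stream=False):
        self.stream = stream
        self.printed = []  # stand-in for console output

    def _print_received_message(self, message):
        self.printed.append(message)

    def _process_received_message(self, message):
        # Only print when the reply was not already streamed to the console
        if not self.stream:
            self._print_received_message(message)

streaming_agent = Agent(stream=True)
streaming_agent._process_received_message("hello")  # nothing printed

plain_agent = Agent(stream=False)
plain_agent._process_received_message("hello")  # printed once
```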

@yiranwu0
Collaborator

yiranwu0 commented Nov 10, 2023

Hello @Alvaromah, you are right:

  1. The caching is a problem and I didn't think it through; I didn't have caching for streaming objects. One possible solution is to update the cache on each chunk and use yield (see the pseudocode).
  2. The second point won't be an issue. When you call oai in generate_oai_reply, you would do the check for streaming, print and parse your message there, and always return the message as an array.

# in client.py
content = ""
for new_input in oai_generator:
    content += new_input
    cache.set(key, content)
    yield new_input

# in conversable generate_oai_reply
response = client.create(...)
if isinstance(response, types.GeneratorType):
    # do the printing and parsing here
    ...

Overall, this alternative is more complex. My idea for client.py is that it returns the same thing as openai's client does (even a generator, but with more features). Imagine I want to create an agent calling the client with streaming, and I want to process the streamed strings in real time before printing them out. What do you think?
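Fleshed out into runnable form (with a plain dict standing in for the real cache backend; the names are illustrative):

```python
cache = {}  # stand-in for the real cache backend

def stream_with_cache(key, oai_generator):
    """Yield chunks as they arrive while keeping the cache up to date."""
    content = ""
    for new_input in oai_generator:
        content += new_input
        cache[key] = content  # the cache always holds everything streamed so far
        yield new_input

chunks = list(stream_with_cache("req-1", iter(["foo", "bar"])))
# After consumption, cache["req-1"] holds the full concatenated response
```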

@sonichi
Collaborator

sonichi commented Nov 10, 2023

@kevin666aa has a valid point that it's better to keep the create() method compatible with openai's create(), and deal with printing outside.

@Alvaromah
Collaborator Author

Hi @kevin666aa, I like your approach, but it seems it might need additional adjustments, possibly even a refactoring of the workflow and of how we handle responses (including content, functions, and caching, among other aspects).
Moreover, considering the recent developments regarding OpenAI's assistants and threads, we should prepare for further modifications.
This pull request was initially intended as a quick, non-disruptive enhancement to add streaming responses, given that streaming isn't a core feature of the project. However, for a more robust and refined approach, it looks like we'll need to delve deeper into the codebase.

@Alvaromah
Collaborator Author

I have uploaded the tests and added the comments regarding the function streaming as requested.
While it's clear that this isn't the most optimal solution, it can serve as a temporary mechanism to support streaming.

@sonichi
Collaborator

sonichi commented Nov 11, 2023

@Alvaromah
Collaborator Author

test failure: https://github.com/microsoft/autogen/actions/runs/6829779356/job/18576570491?pr=597

Seems to be a problem with the configuration.

This is failing:

def test_completion_stream():
    config_list = config_list_from_json(
        env_or_file=OAI_CONFIG_LIST,
        file_location=KEY_LOC,
        filter_dict={"model": ["gpt-3.5-turbo-instruct"]},
    )

    client = OpenAIWrapper(config_list=config_list)

I changed it to this, as in test_client.py:

def test_completion_stream():
    config_list = config_list_openai_aoai(KEY_LOC)
    client = OpenAIWrapper(config_list=config_list)

@sonichi
Collaborator

sonichi commented Nov 11, 2023

@kevin666aa could you create an issue about the followup changes needed?

Collaborator

@sonichi sonichi left a comment

Merging as a temporary solution.

@sonichi sonichi added this pull request to the merge queue Nov 11, 2023
Merged via the queue into microsoft:main with commit 849feda Nov 11, 2023
43 of 45 checks passed
@Alvaromah Alvaromah deleted the streaming-support-v1 branch November 12, 2023 02:01
@yiranwu0
Collaborator

@kevin666aa could you create an issue about the followup changes needed?

Got it!

jfischburg-us added a commit to jfischburg-us/autogen that referenced this pull request Nov 12, 2023

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Fix grammar in agentchat_stream.ipynb (microsoft#357)

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Fix grammar and spelling in agentchat_planning.ipynb (microsoft#356)

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Fix the grammar and spelling in agentchat_human_feedback.ipynb (microsoft#354)

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Update agentchat_langchain.ipynb (microsoft#355)

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* headsup about dependency change (microsoft#378)

* headsup about dependency change

* more change

* Add link to OptiGuide microsoft#371 (microsoft#376)

* OptiGuide Link

* Update AutoGen-AgentChat.md

* fixes

* Update docs in RetrieveChat notebook and Installation (microsoft#379)

* Update comments to make it more clear

* Update Installation

* config list for test (microsoft#395)

* Supporting MultiModal Models: an example with LLaVA Notebook (microsoft#286)

* LMM notebook

* Use "register_reply" instead.

* Loop check LLaVA non-empty response

* Run notebook

* Make the llava_call function more flexible

* Include API for LLaVA from Replicate

* LLaVA data format update x2
1. prompt formater function
2. conversation format with SEP

* Coding example added

* Rename "ImageAgent" -> "LLaVAAgent"

* Docstring and comments updates

* Debug notebook: Remote LLaVA tested

* Example 1: remove system message

* MultimodalConversableAgent added

* Add gpt4v_formatter

* LLaVA: update example 1

* LLaVA: Add link to "Table of Content"

* using thread safe timeout to allow code execution to be compatible with multi-threading/multi-processing (microsoft#224)

Co-authored-by: Li Jiang <lijiang1@microsoft.com>
Co-authored-by: Victor Dibia <chuvidi2003@gmail.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
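Signal-based timeouts (`signal.alarm`) only work in the main thread, which is why code execution needed a thread-safe mechanism. A rough illustration of one such approach using `concurrent.futures` (a sketch of the general technique, not autogen's exact implementation; note Python cannot forcibly kill the worker, it can only stop waiting for it):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

def run_with_timeout(fn, timeout, *args, **kwargs):
    """Run fn with a wall-clock timeout; safe to call from any thread."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, *args, **kwargs)
        try:
            return future.result(timeout=timeout)
        except FuturesTimeout:
            return None  # caller treats None as "timed out"

result = run_with_timeout(lambda: sum(range(1000)), timeout=5)
```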

* Fix tmp dir not exists (microsoft#401)

* Fix tmp dir not exists

* Update tests to make it more clear

* Add check if save path is not None

* feat: Qdrant vector store support (microsoft#303)

* feat: QdrantRetrieveUserProxyAgent

* fix: QdrantRetrieveUserProxyAgent docstring

* chore: batch of 500 all CPU cores

* chore: conditional import for tests

* chore: config parallel, batch 100

* chore: collection creation params

* chore: conditional payload indexing
fastembed import check

* docs: notebook for QdrantRetrieveUserProxyAgent

* docs: update docs link

* docs: notebook examples update

* chore: hnsw, payload index reference

* docs: notebook docs_path update

* Update test/agentchat/test_qdrant_retrievechat.py

Co-authored-by: Li Jiang <bnujli@gmail.com>

* chore: update notebook output

* Fix format

---------

Co-authored-by: Li Jiang <bnujli@gmail.com>

* [Blocking Issue] Add tests dependencies for qdrant and fix chromadb errors (microsoft#435)

* Add tests dependencies for qdrant

* Update chromadb API

* Update chromadb API version

* Fix typehint

* Add py 3.9 condition

* Fix client creation error

* TeachableAgent blog post (microsoft#436)

* Authors

* initial checkin

* completed blog post

* trim trailing whitespace

* date

* Address reviewer feedback.

* Adds jupyter as a vscode extension, fix validation errors in devcontainer.json (microsoft#433)

* Adds jupyter as a vscode extension, fix validation errors in vscode (see https://containers.dev/supporting#visual-studio-code)

* Trim trailing whitespace

* Add newline to end of file

---------

Co-authored-by: Li Jiang <bnujli@gmail.com>

* Update FAQ section in documentation (microsoft#390)

* UPDATE - FAQ section in documentation

* FIX - formatting test failure

* FIX - added disclaimer

* pre-commit

* Update website/docs/FAQ.md

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Update website/docs/FAQ.md

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Update website/docs/FAQ.md

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* UPDATE - notebook and FAQ information for config_list_from_models

---------

Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>

* Add token_count_util (microsoft#421)

* add token_count_util

* remove token_count from retrieval util

* format

* update dependency

* update test

* spelling fix for Update math_user_proxy_agent.py (microsoft#431)

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* bump version to 0.1.14 (microsoft#400)

* bump version to 0.1.14

* endpoint

* test

* test

* add ipython to retrievechat dependency

* constraints

* target

* Update Installation.md (microsoft#456)

* Update Installation.md

Replace autogen->pyautogen in env setup to avoid confusion

Related issue: microsoft#211

* Update Installation.md

Add deactivation instructions

* Update website/docs/Installation.md

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Update FAQ with workaround for Issue microsoft#251 (microsoft#405)

* Update FAQ with workaround for Issue microsoft#251

* Update website/docs/FAQ.md

* Update website/docs/FAQ.md

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Fix typo in README.md (microsoft#481)

Contributers -> Contributors

* Fix/async function and tool execution (microsoft#87)

* async run group chat

* conversable agent allow async functions to generate reply

* test for async execution

---------

Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Adding async support to get_human_input (microsoft#466)

* Adding async support to get_human_input

* Adjust code for Code formatting testing fail

* Adjust the test_async_get_human_input.py to run async on test

* Adjust the test_async_get_human_input.py for pre-commit-check error

* Adjust the test_async_get_human_input.py for pre-commit-check error v2

* Adjust remove unnecessary register_reply

* Adjust test to use asyncio call

* Adjust go back to not use asyncio
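The async support added above keeps a blocking prompt from stalling the event loop. One common way to achieve that is to push the blocking read onto an executor — a sketch under that assumption, with a stubbed reader instead of real stdin (not autogen's actual implementation):

```python
import asyncio

async def a_get_human_input(prompt, reader=input):
    """Await human input without blocking the event loop."""
    loop = asyncio.get_running_loop()
    # run_in_executor offloads the blocking call to a worker thread
    return await loop.run_in_executor(None, reader, prompt)

# Demo with a stubbed reader so nothing actually blocks on stdin:
reply = asyncio.run(a_get_human_input(">>> ", reader=lambda _: "approve"))
```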

* Added example .txt file for agentchat_langchain sample notebook (microsoft#373)

* Added example .txt file for agentchat_langchain sample notebook

* Update radius.txt

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Update README.md (microsoft#506)

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Dev/v0.2 (microsoft#393)

* api_base -> base_url (microsoft#383)

* InvalidRequestError -> BadRequestError (microsoft#389)

* remove api_key_path; close microsoft#388

* close microsoft#402 (microsoft#403)
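The v0.2 renames above track openai v1's own API changes, so configs written against the old key need a one-line migration. A small sketch in pure dict manipulation (helper name illustrative):

```python
def migrate_config(cfg):
    """Rename the openai<1 'api_base' key to the v1 'base_url' key."""
    cfg = dict(cfg)  # avoid mutating the caller's dict
    if "api_base" in cfg:
        cfg["base_url"] = cfg.pop("api_base")
    return cfg

old = {"model": "gpt-4", "api_base": "https://example.openai.azure.com"}
new = migrate_config(old)
```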

* openai client (microsoft#419)

* openai client

* client test

* _client -> client

* _client -> client

* extra kwargs

* Completion -> client (microsoft#426)

* Completion -> client

* Completion -> client

* Completion -> client

* Completion -> client

* support aoai

* fix test error

* remove commented code

* support aoai

* annotations

* import

* reduce test

* skip test

* skip test

* skip test

* debug test

* rename test

* update workflow

* update workflow

* env

* py version

* doc improvement

* docstr update

* openai<1

* add tiktoken to dependency

* filter_func

* async test

* dependency

* migration guide (microsoft#477)

* migration guide

* change in kwargs

* simplify header

* update optiguide description

* deal with azure gpt-3.5

* add back test_eval_math_responses

* timeout

* Add back tests for RetrieveChat (microsoft#480)

* Add back tests for RetrieveChat

* Fix format

* Update dependencies order

* Fix path

* Fix path

* Fix path

* Fix tests

* Add not run openai on MacOS or Win

* Update skip openai tests

* Remove unnecessary dependencies, improve format

* Add py3.8 for testing qdrant

* Fix multiline error of windows

* Add openai tests

* Add dependency mathchat, remove unused envs

* retrieve chat is tested

* bump version to 0.2.0b1

---------

Co-authored-by: Li Jiang <bnujli@gmail.com>

* Added a simple Testbed tool for repeatedly running templated Autogen scenarios with tightly-controlled initial conditions. (microsoft#455)

* Initial commit of the autogen testbed environment.

* Fixed some typos in the Testbed README.md

* Added some stricter termination logic to the two_agent scenario, and switched the logo task from finding Autogen's logo to finding Microsoft's (it's easier)

* Added documentation to testbed code in preparation for PR

* Added a variation of HumanEval to the Testbed. It is also a reasonable example of how to integrate other benchmarks.

* Removed ChatCompletion.start_logging and related features. Added an explicit TERMINATE output to HumanEval to save 1 turn in each conversation.

* Added metrics utils script for HumanEval

* Updated the requirements in the README.

* Added documentation for HumanEval csv schemas

* Standardized on how the OAI_CONFIG_LIST is handled.

* Removed dot-slash from 'includes' path for cross-platform compatibility

* Missed a file.

* Updated readme to include known-working versions.

* Fix typo import autogen (microsoft#549)

* Add support to unstructured (microsoft#501)

* Add support to unstructured

* Fix tests

* Add test and documents

* Fix tests

* Fix tests

* Test unstructured on linux and mac

* Update TRANSPARENCY_FAQS.md (microsoft#492)

fixed grammatical error
FAQ--->FAQs

Co-authored-by: gagb <gagb@users.noreply.github.com>

* Update README.md (microsoft#507)

Fixed small typos

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* fix wrong 'Langchain Provided Tools as Functions' doc ref (microsoft#495)

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* copy dicts before modifying (microsoft#551)

* copy dicts before modifying

* update notebooks

* update notebooks

* close microsoft#567
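The "copy dicts before modifying" fix above avoids mutating caller-owned dicts in place. The hazard it addresses, in miniature (function names and keys illustrative):

```python
import copy

def bad_prepare(params):
    params["stream"] = False        # mutates the caller's dict — the bug class
    return params

def good_prepare(params):
    params = copy.deepcopy(params)  # work on a private copy instead
    params["stream"] = False
    return params

llm_config = {"model": "gpt-4", "stream": True}
prepared = good_prepare(llm_config)
# llm_config is unchanged; prepared carries the override
```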

* Large Multimodal Models in AgentChat (microsoft#554)

* LMM Code added

* LLaVA notebook update

* Test cases and Notebook modified for OpenAI v1

* Move LMM into contrib
To resolve test issues and deploy issues
In the future, we can install pillow by default, and then move back
LMM agents into agentchat

* LMM test setup update

* try...except... clause for LMM tests

* disable patch for llava agent test
To resolve dependency issues for build

* Add LMM Blog

* Change docstring for LMM agents

* Docstring update patch

* llava: insert reply at position 1 now
So, it can still handle human_input_mode
and max_consecutive_reply

* Resolve comments
Fixing: typos, blogs, yml, and add OpenAIWrapper

* Signature typo fix for LMM agent: system_message

* Update LMM "content" from latest OpenAI release
Reference  https://platform.openai.com/docs/guides/vision

* update LMM test according to latest OpenAI release

* Fully support GPT-4V now
1. Add a notebook for GPT-4V. LLava notebook also updated.
2. img_utils updated
3. GPT-4V formatter now return base64 image with mime type
4. Infer mime type directly from b64 image content (while loading
   without suffix)
5. Test cases modified according to all the related changes.

* GPT-4V link updated in blog

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* update version of openai dependency (microsoft#588)

* Notebook/hierarchy flow (microsoft#482)

* Notebook showing how to use select speaker to control conversation flow.

* pytest associated with notebook.

* Added llm_config to assistant and user proxy agent, and clarified why we set use_cache to false, as requested in the review.

* Added a @pytest.mark.skipif decorator like other tests to run it only in one py version, 3.10

* Fixed config warning.

* Removed llm_config from UserProxyAgent

* Fixed minor typos.

* Reran outputs

* Removed llm_config from user_proxy_agent

* Colab Badge link updated.

* pre-commit formatting changes.

* Fixed base_url

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* seed -> cache_seed (microsoft#600)
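With the rename above, llm_configs written for v0.1 need a one-word change. A migration sketch (dict shapes illustrative):

```python
old_config = {"config_list": [{"model": "gpt-4"}], "seed": 42}

# v0.2: the response-caching parameter is named cache_seed
new_config = {k: v for k, v in old_config.items() if k != "seed"}
new_config["cache_seed"] = old_config["seed"]
```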

* Added link to the new notebook (microsoft#594)

* update return type of WolframAlphaAPIWrapper.run() (microsoft#523)

* update return type of WolframAlphaAPIWrapper.run

* replace tuple by typing.Tuple

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Yiran Wu <32823396+kevin666aa@users.noreply.github.com>

* news update (microsoft#609)

* news update

* separate lines

* Add EcoAssistant to the research page (microsoft#612)

* Update Research.md

* Update Research.md

* Update Research.md

* Add CompressibleAgent (microsoft#443)

* api_base -> base_url (microsoft#383)

* InvalidRequestError -> BadRequestError (microsoft#389)

* remove api_key_path; close microsoft#388

* close microsoft#402 (microsoft#403)

* openai client (microsoft#419)

* openai client

* client test

* _client -> client

* _client -> client

* extra kwargs

* Completion -> client (microsoft#426)

* Completion -> client

* Completion -> client

* Completion -> client

* Completion -> client

* support aoai

* fix test error

* remove commented code

* support aoai

* annotations

* import

* reduce test

* skip test

* skip test

* skip test

* debug test

* rename test

* update workflow

* update workflow

* env

* py version

* doc improvement

* docstr update

* openai<1

* add compressibleagent

* revise doc, add tests, add example

* fix link

* fix link

* fix link

* remove test

* update doc

* update doc

* add tiktoken to dependency

* filter_func

* async test

* dependency

* revision

* migration guide (microsoft#477)

* migration guide

* change in kwargs

* simplify header

* update optiguide description

* update for dev

* revision

* revision

* allow not compressing last n msgs

* update

* correct merge

* update test workflow

* check test

* update for test

* update

* update notebook

* update

* fix bug

* update

* update

* update

* check to "pull_request_target" in contrib-openai

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* add AutoGen paper info at the beginning of readme (microsoft#621)

* add paper info on top of readme

* update info

* add paper info

* revise date

* Update oai_completion.ipynb (microsoft#623)

Missing import

* Added warnings for some GroupChat misconfigurations and selection errors (microsoft#603)

* Added warnings for some GroupChat misconfigurations and selection errors

* Fixed formatting

* Introducing Experimental GPT Assistant Agent  in AutoGen (microsoft#616)

* add gpt assistant agent

* complete code

* Inherit class ConversableAgent

* format code

* add code comments

* add test case

* format code

* fix test

* format code

* Improve GPTAssistant

* Use OpenAIWrapper to create client
* Implement clear_history()
* Reply message formatting improvements
* Handle the case when content contains image files

* README update

* Fix doc string of methods

* add multiple conversations support

* Add GPT Assistant Agent into README

* fix test

---------

Co-authored-by: gagb <gagb@users.noreply.github.com>
Co-authored-by: Beibin Li <beibin79@gmail.com>

* added twitter(X) banner + link to readme (microsoft#615)

* added twitter(X) banner + link to readme

* Update README.md

Fix typo in the label

---------

Co-authored-by: gagb <gagb@users.noreply.github.com>

* Enable streaming support for openai v1 (microsoft#597)

* Enable streaming support for openai v1

* Added tests for openai client streaming

* Fix test_completion_stream
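This PR's streaming mode delivers the reply as incremental chunks rather than one complete message. The accumulation pattern it relies on, sketched with stand-in dict chunks (the real objects come from the openai v1 client when `stream=True`):

```python
def collect_stream(chunks):
    """Concatenate the content deltas of a chunked chat completion."""
    pieces = []
    for chunk in chunks:
        delta = chunk.get("content")
        if delta:
            print(delta, end="", flush=True)  # surface tokens as they arrive
            pieces.append(delta)
    return "".join(pieces)

# Stand-in for the chunk stream the API would yield:
fake_stream = [{"content": "Hel"}, {"content": "lo"}, {"content": None}, {"content": "!"}]
reply = collect_stream(fake_stream)
```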

* improve readme (microsoft#630)

* improve readme

* Update README.md

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* Handled possible unclear IndexError in ConversableAgent.last_message method (microsoft#622)

* Handled possible IndexError in ConversableAgent.last_message method with more clear error message and added test in test_conversable_agent.py.

* Fix code formatting issues.

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>
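The fix above replaces a bare IndexError with an explanatory message. The pattern in isolation (function and message wording are illustrative, not autogen's exact text):

```python
def last_message(history):
    """Return the most recent message, failing loudly if none exist."""
    if not history:
        raise ValueError(
            "No message exchanged yet: the conversation history is empty."
        )
    return history[-1]

latest = last_message([{"content": "hi"}, {"content": "bye"}])
```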

* Fix test error of compressible agent (microsoft#631)

* fix bug in test

* update workflow

* update

* deepcopy to copy

* Fix docstring of get_or_create (microsoft#583)

* Fix docstring of get_or_create

* Improve docstring

* Refactor GPTAssistantAgent  (microsoft#632)

* Refactor GPTAssistantAgent constructor to handle
instructions and overwrite_instructions flag

- Ensure that `system_message` is always consistent with `instructions`
- Ensure provided instructions are always used
- Add option to permanently modify the instructions of the assistant

* Improve default behavior

* Add a test; add method to delete assistant

* Add a new test for overwriting instructions

* Add test case for when no instructions are given for existing assistant

* Add pytest markers to test_gpt_assistant.py

* add test in workflow

* update

* fix test_client_stream

* comment out test_hierarchy_

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: kevin666aa <yrwu000627@gmail.com>

* uncomment test (microsoft#640)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Juanma Cuevas <jumacuca@gmail.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>
Co-authored-by: Ali Eren SALKIM <129160641+AlyrenN@users.noreply.github.com>
Co-authored-by: Aaron <aaronlaptop12@hotmail.com>
Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
Co-authored-by: Priyanshu Yashwant Deshmukh <69320370+priyansh4320@users.noreply.github.com>
Co-authored-by: Xiaoyun Zhang <bigmiao.zhang@gmail.com>
Co-authored-by: Hiftie <127197446+hiftielabs@users.noreply.github.com>
Co-authored-by: Yiran Wu <32823396+kevin666aa@users.noreply.github.com>
Co-authored-by: Shaurya Rohatgi <shauryr@gmail.com>
Co-authored-by: Al-Ekram Elahee Hridoy <aliqramalaheehridoy@gmail.com>
Co-authored-by: Al-Iqram Elahee <hridoy@Al-Iqrams-MacBook-Pro.local>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Co-authored-by: Mohamed Attia <mu.attiyah@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ujjwal gupta <guptarickey3@gmail.com>
Co-authored-by: lars.gersmann <lars.gersmann@gmail.com>
Co-authored-by: Hyung-Taik Choi <htc.refactor@gmail.com>
Co-authored-by: Tristan Murphy <72839119+HyperCodec@users.noreply.github.com>
Co-authored-by: Sagar Desai <60027013+SDcodehub@users.noreply.github.com>
Co-authored-by: Sagar Desai <60027013+sagardesai-ml-mlops@users.noreply.github.com>
Co-authored-by: mrauter1 <marcelorauter@gmail.com>
Co-authored-by: Manish Kumar <51908018+manish7017@users.noreply.github.com>
Co-authored-by: Olaoluwa Ademola Salami <oyomafia@gmail.com>
Co-authored-by: Olaoluwa Ademola Salami <olaoluwaasalami@gmail.com>
Co-authored-by: Vidhula <58629266+vidhula17@users.noreply.github.com>
Co-authored-by: vidhula17 <catchvidhula@gmail.com>
Co-authored-by: Allen Shi <33379392+AllenJShi@users.noreply.github.com>
Co-authored-by: afourney <adam.fourney@gmail.com>
Co-authored-by: afourney <adamfo@microsoft.com>
Co-authored-by: Li Jiang <lijiang1@microsoft.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: wayliums <wayliums@users.noreply.github.com>
Co-authored-by: Manuel Saelices <msaelices@gmail.com>
Co-authored-by: gagb <gagb@users.noreply.github.com>
Co-authored-by: Deepanshu <91846266+creator0131@users.noreply.github.com>
Co-authored-by: Gaëtan H <gaetanhus@gmail.com>
Co-authored-by: Javid Jamae <javidjamae@gmail.com>
Co-authored-by: Sheetali Maity <74114936+smty2018@users.noreply.github.com>
Co-authored-by: James Tsang <wtzeng1@gmail.com>
Co-authored-by: Hitesh Bansal <83907989+05hiteshbansal@users.noreply.github.com>
Co-authored-by: Shruti Patel <shruti222patel@users.noreply.github.com>
Co-authored-by: Gourav <herculeswarrior.in@gmail.com>
Co-authored-by: Elliot Wood <gigaflare_elliot@hotmail.com>
Co-authored-by: Maxim Saplin <smaxmail@gmail.com>
Co-authored-by: Ayush Kumar Pandit <31253617+Ayushpanditmoto@users.noreply.github.com>
Co-authored-by: Surav Shrestha <148448735+suravkshrestha@users.noreply.github.com>
Co-authored-by: Victor Dibia <chuvidi2003@gmail.com>
Co-authored-by: Haseeb Ansari <47222685+haseeb-xd@users.noreply.github.com>
Co-authored-by: Ricky Loynd <riloynd@microsoft.com>
Co-authored-by: Sean Connelly <47223469+2good4hisowngood@users.noreply.github.com>
Co-authored-by: Ishita Pathak <75848598+IshitaPathak@users.noreply.github.com>
Co-authored-by: Ansh Babbar <31804810+rabbabansh@users.noreply.github.com>
Co-authored-by: Beibin Li <BeibinLi@users.noreply.github.com>
Co-authored-by: Ragy Abraham <52903382+ragyabraham@users.noreply.github.com>
Co-authored-by: Anush <anushshetty90@gmail.com>
Co-authored-by: Craig Presti <146438+craigomatic@users.noreply.github.com>
Co-authored-by: rajpal <codingdrone@gmail.com>
Co-authored-by: Marc Green <marcgreen@users.noreply.github.com>
Co-authored-by: Aayush Chhabra <aayushgen@gmail.com>
Co-authored-by: bonadio <cesar.bonadio@gmail.com>
Co-authored-by: Jason Holtkamp <holtkam2@gmail.com>
Co-authored-by: Aditya <114663382+AaadityaG@users.noreply.github.com>
Co-authored-by: hung_ng__ <51025722+hung-ngm@users.noreply.github.com>
Co-authored-by: gfggithubleet <144522681+gfggithubleet@users.noreply.github.com>
Co-authored-by: Vatsalya Vyas <vatsalyavyas@gmail.com>
Co-authored-by: AkariLan <850439027@qq.com>
Co-authored-by: Joshua Kim <joshkyh@users.noreply.github.com>
Co-authored-by: 1073710317 <1073710317@163.com>
Co-authored-by: Jieyu Zhang <jieyuz2@cs.washington.edu>
Co-authored-by: Andreas Volkmann <avolkmann@icloud.com>
Co-authored-by: Ian <ArGregoryIan@gmail.com>
Co-authored-by: Beibin Li <beibin79@gmail.com>
Co-authored-by: Malik Muhammad Moaz <40146994+malikmmoaz@users.noreply.github.com>
Co-authored-by: Alvaro Mateos <Alvaromah@users.noreply.github.com>
Co-authored-by: wonderful <3269753363@qq.com>
Co-authored-by: kevin666aa <yrwu000627@gmail.com>