initial commit

zhangjingqiang 2023-03-09 17:55:45 +08:00
commit 13716f4923
1425 changed files with 163227 additions and 0 deletions

17
.gitignore vendored Normal file

@@ -0,0 +1,17 @@
# IDE
.idea/
# Cargo
.cargo
/target/
# Sphinx
g3proxy/doc/_build
# deb package
/debian/
# tmp file
g3proxy/service/g3proxy@.service

4
CHANGELOG Normal file

@@ -0,0 +1,4 @@
There is no CHANGELOG at the workspace level.
You can find CHANGELOG for each component in their own directory.

127
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,127 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socioeconomic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

61
CONTRIBUTING.md Normal file

@@ -0,0 +1,61 @@
# Contributing
Thank you for investing your time in contributing to <NAME> project!
Read our [Code of Conduct](CODE_OF_CONDUCT.md) to keep our community approachable and respectable.
This guide details how to use issues and pull requests to improve <NAME> project.
## General Guidelines
### Pull Requests
Make sure to keep Pull Requests small and functional to make them easier to review, understand, and look up in commit history. This repository uses "Squash and Commit" to keep our history clean and make it easier to revert changes on a per-PR basis.
Adding the appropriate documentation, unit tests and e2e tests as part of a feature is the responsibility of the feature owner, whether it is done in the same Pull Request or not.
Pull Requests should follow the "subject: message" format, where the subject describes what part of the code is being modified (for example, an illustrative title like "g3proxy: fix resolver timeout handling").
Refer to the template for more information on what goes into a PR description.
### Design Docs
A contributor proposes a design with a PR on the repository to allow for revisions and discussions. If a design needs to be discussed before formulating a document for it, make use of a Google doc and a GitHub issue to involve the community in the discussion.
### GitHub Issues
GitHub Issues are used to file bugs, work items, and feature requests with actionable items/issues (Please refer to the "Reporting Bugs/Feature Requests" section below for more information).
### Reporting Bugs/Feature Requests
We welcome you to use the GitHub issue tracker to report bugs or suggest features that have actionable items/issues (as opposed to introducing a feature request on GitHub Discussions).
When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
- A reproducible test case or series of steps
- The version of the code being used
- Any modifications you've made relevant to the bug
- Anything unusual about your environment or deployment
## Contributing via Pull Requests
### Find an interesting issue
If you spot a problem, [search if an issue already exists](https://github.com/bytedance/g3-ose/issues). If a related issue doesn't exist, you can open a new issue using the [issue template](https://github.com/bytedance/g3-ose/issues/new/choose).
### Solve an issue
Please check `DEVELOPMENT.md` in the relevant subdirectory to get familiar with running and testing the code.
### Open a pull request
When you're done making the changes, open a pull request and fill in the PR template so we can better review your PR. The template helps reviewers understand your changes and the purpose of your pull request.
Don't forget to link the PR to an issue if you are solving one.
If you run into any merge issues, check out this [git tutorial](https://lab.github.com/githubtraining/managing-merge-conflicts) to help you resolve merge conflicts and other issues.
## Finding contributions to work on
Looking at the existing issues is a great way to find something to contribute to. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wont fix), the 'help wanted' and 'good first issue' issues are a great place to start.

3556
Cargo.lock generated Normal file

File diff suppressed because it is too large

65
Cargo.toml Normal file

@@ -0,0 +1,65 @@
[workspace]
members = [
"lib/g3-types",
"lib/g3-io-ext",
"lib/g3-daemon",
"lib/g3-socket",
"lib/g3-signal",
"lib/g3-compat",
"lib/g3-clap",
"lib/g3-yaml",
"lib/g3-json",
"lib/g3-msgpack",
"lib/g3-runtime",
"lib/g3-resolver",
"lib/g3-encoding",
"lib/g3-datetime",
"lib/g3-stdlog",
"lib/g3-syslog",
"lib/g3-journal",
"lib/g3-fluentd",
"lib/g3-statsd",
"lib/g3-xcrypt",
"lib/g3-ftp-client",
"lib/g3-http",
"lib/g3-h2",
"lib/g3-icap-client",
"lib/g3-socks",
"lib/g3-dpi",
"lib/g3-tls-cert",
"g3bench",
"g3rcgen",
"g3proxy",
"g3proxy/proto",
"g3proxy/utils/ctl",
"g3proxy/utils/ftp",
"g3proxy/utils/lua",
"g3tiles",
"g3tiles/proto",
"g3tiles/utils/ctl",
"demo/test-int-signal",
"demo/test-tcp-relay",
"demo/test-resolver",
"demo/test-copy-yield",
]
default-members = [
"g3bench",
"g3rcgen",
"g3proxy",
"g3proxy/utils/ctl",
"g3tiles",
"g3tiles/utils/ctl",
]
[profile.release-lto]
inherits = "release"
strip = true
lto = true
[profile.release-dbg]
inherits = "release"
debug = 1
debug-assertions = false
[patch.crates-io]
metadeps = { version = "1.1.2", git = "https://github.com/zh-jq/metadeps.git", branch = "modernize" }

201
LICENSE Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2022 张敬强
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

110
README.md Normal file

@@ -0,0 +1,110 @@
[![minimum rustc: 1.66](https://img.shields.io/badge/minimum%20rustc-1.66-green?logo=rust)](https://www.whatrustisit.com)
[![License: Apache 2.0](https://img.shields.io/badge/license-Apache_2.0-blue.svg)](LICENSE)
# G3 Project
## About
This is the project we use to build enterprise-oriented general proxy solutions,
including but not limited to proxy / reverse proxy / load balancer / NAT traversal.
## Components
G3 Project is made up of many components.
The project-level docs reside in the *doc* subdirectory; see the links below for the important ones.
Each component has its own docs in its *doc* subdirectory.
### g3proxy
A general forward proxy solution, but you can also use it as a TCP streaming / transparent proxy / reverse proxy,
as we have basic support built in.
See [g3proxy](g3proxy/README.md) for detailed introduction.
### g3tiles
A work-in-progress reverse proxy solution.
### g3bench
A benchmark tool for testing g3proxy.
### g3rcgen
A certificate generator for g3proxy.
## Dev-env Setup Guide
Follow [dev-setup](doc/dev-setup.md).
## Standards
Follow [standards](doc/standards.md).
## Release and Packaging
We will set tags for each release of each component, in the form *\<name\>-v\<version\>*.
You can use these tags to generate source tarballs.
Deb and rpm package files are provided for each component that is ready for distribution.
If you want to do a release build:
1. generate a release tarball
```shell
./scripts/release/build_tarball.sh <name>-v<version>
```
All vendor sources will be added to the source tarball, so you can save the source tarball and build it offline
anywhere that has the compiler and dependencies installed.
2. build the package
For deb package:
```shell
tar xf <name>-<version>.tar.xz
cd <name>-<version>
./build_deb_from_tar.sh
```
For rpm package:
```shell
tar xvf <name>-<version>.tar.xz ./<name>-<version>/<name>.spec
cp <name>-<version>.tar.xz ~/rpmbuild/SOURCES/
rpmbuild -ba ./<name>-<version>/<name>.spec
```
If you want to build a package directly from the git repo:
- For deb package:
```shell
./build_deb_from_git.sh <name>
```
- For rpm package:
```shell
./build_rpm_from_git.sh <name>
```
## Contribution
Please check [Contributing](CONTRIBUTING.md) for more details.
## Code of Conduct
Please check [Code of Conduct](CODE_OF_CONDUCT.md) for more details.
## Security
If you discover a potential security issue in this project, or think you may
have discovered a security issue, we ask that you notify Bytedance Security via our
[security center](https://security.bytedance.com/src) or [vulnerability reporting email](mailto:sec@bytedance.com).
Please do **not** create a public GitHub issue.
## License
This project is licensed under the [Apache-2.0 License](LICENSE).

10
demo/test-copy-yield/Cargo.toml Normal file

@@ -0,0 +1,10 @@
[package]
name = "test-copy-yield"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tokio = { version = "1.23", features = ["time", "macros", "rt"] }
g3-io-ext = { path = "../../lib/g3-io-ext" }

74
demo/test-copy-yield/src/main.rs Normal file

@@ -0,0 +1,74 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::future::poll_fn;
use std::io;
use std::pin::Pin;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::task::{Context, Poll};
use std::time::Duration;
use tokio::io::{AsyncRead, ReadBuf};
use tokio::signal::unix::{signal, SignalKind};
use tokio::time::Instant;
static TOTAL_FILLED: AtomicUsize = AtomicUsize::new(0);
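// A reader that always fills the whole buffer and returns Ready immediately,
// so a copy loop that never yields back to the runtime will starve other
// tasks (such as the signal handler spawned in main).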
struct AlwaysFill {
c: u8,
}
impl AsyncRead for AlwaysFill {
fn poll_read(
self: Pin<&mut Self>,
_cx: &mut Context<'_>,
buf: &mut ReadBuf<'_>,
) -> Poll<io::Result<()>> {
let remaining = buf.remaining();
let b = buf.initialize_unfilled_to(remaining);
b.fill(self.c);
buf.advance(remaining);
TOTAL_FILLED.fetch_add(remaining, Ordering::Relaxed);
Poll::Ready(Ok(()))
}
}
#[tokio::main(flavor = "current_thread")]
async fn main() {
let mut signal = signal(SignalKind::interrupt()).unwrap();
tokio::spawn(async move {
poll_fn(|cx| signal.poll_recv(cx)).await;
// Ctrl-C won't be handled if the copy never yields; use Ctrl-\ to quit
println!("received interrupt signal");
std::process::exit(-1);
});
tokio::spawn(async {
let mut reader = AlwaysFill { c: b'A' };
let mut sink = tokio::io::sink();
println!("start copy");
let _ = g3_io_ext::LimitedCopy::new(&mut reader, &mut sink, &Default::default()).await;
// let _ = tokio::io::copy(&mut reader, &mut sink).await;
});
let time_start = Instant::now();
tokio::time::sleep(Duration::from_secs(4)).await;
let total_filled = TOTAL_FILLED.load(Ordering::Relaxed);
println!(
"exit after {:?}, total filled: {total_filled}",
time_start.elapsed(),
);
}

10
demo/test-int-signal/Cargo.toml Normal file

@@ -0,0 +1,10 @@
[package]
name = "test-int-signal"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tokio = {version = "1.0", features = ["rt-multi-thread", "macros"]}
g3-signal = {path = "../../lib/g3-signal"}

39
demo/test-int-signal/src/main.rs Normal file

@@ -0,0 +1,39 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use tokio::signal::unix::SignalKind;
use g3_signal::{ActionSignal, SigResult};
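// Called on each SIGINT: the first Ctrl-C only prints a warning and keeps
// waiting, any subsequent one breaks out and lets the program quit.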
fn do_at_quit(count: u32) -> SigResult {
match count {
1 => {
println!("press 'Ctrl-C' again to quit");
SigResult::Continue
}
_ => {
println!("quit");
SigResult::Break
}
}
}
#[tokio::main]
async fn main() {
let sig = ActionSignal::new(SignalKind::interrupt(), &do_at_quit).unwrap();
println!("SIGINT registered, press 'Ctrl-C' to quit");
sig.await;
}

16
demo/test-resolver/Cargo.toml Normal file

@@ -0,0 +1,16 @@
[package]
name = "test-resolver"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tokio = { version = "1.0", features = ["rt"] }
log = { version = "0.4", features = ["max_level_trace", "release_max_level_info"] }
slog = { version = "2", features = ["max_level_trace", "release_max_level_info"] }
slog-scope = "4"
slog-stdlog = "4"
g3-types = { path = "../../lib/g3-types", features = ["async-log"] }
g3-resolver = { path = "../../lib/g3-resolver", features = ["trust-dns"] }
g3-stdlog = { path = "../../lib/g3-stdlog" }

65
demo/test-resolver/src/main.rs Normal file

@@ -0,0 +1,65 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::net::IpAddr;
use std::str::FromStr;
use log::info;
use slog::{slog_o, Drain};
use slog_scope::GlobalLoggerGuard;
use g3_resolver::{
driver::trust_dns::TrustDnsDriverConfig, AnyResolveDriverConfig, ResolverBuilder,
ResolverConfig,
};
use g3_types::log::AsyncLogConfig;
fn setup_log() -> Result<GlobalLoggerGuard, log::SetLoggerError> {
let async_conf = AsyncLogConfig::default();
let drain = g3_stdlog::new_async_logger(&async_conf, true);
let logger = slog::Logger::root(drain.fuse(), slog_o!());
let scope_guard = slog_scope::set_global_logger(logger);
slog_stdlog::init_with_level(log::Level::Trace)?;
Ok(scope_guard)
}
fn main() {
let _logger_guard = setup_log().unwrap();
let rt = tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap();
rt.block_on(async {
let mut config = TrustDnsDriverConfig::default();
config.add_server(IpAddr::from_str("223.5.5.5").unwrap());
let config = ResolverConfig {
name: String::new(),
driver: AnyResolveDriverConfig::TrustDns(config),
runtime: Default::default(),
};
let resolver = ResolverBuilder::new(config).build().unwrap();
let handle = resolver.get_handle();
let mut job = handle.get_v4("www.xjtu.edu.cn".to_string()).unwrap();
let data = job.recv().await;
info!("data: {:?}", data);
let mut job = handle.get_v4("www.xjtu.edu.cn".to_string()).unwrap();
let data = job.recv().await;
info!("data: {:?}", data);
});
}

12
demo/test-tcp-relay/Cargo.toml Normal file

@@ -0,0 +1,12 @@
[package]
name = "test-tcp-relay"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tokio = { version = "1.0", features = ["rt-multi-thread", "macros", "net", "io-util"] }
futures-util = "0.3"
once_cell = "1.7"
g3-io-ext = { path = "../../lib/g3-io-ext" }

17
demo/test-tcp-relay/src/lib.rs Normal file

@@ -0,0 +1,17 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
pub mod stats;

78
demo/test-tcp-relay/src/main.rs Normal file

@@ -0,0 +1,78 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::env;
use std::io;
use std::str::FromStr;
use std::sync::Arc;
use futures_util::future::try_join;
use once_cell::sync::Lazy;
use tokio::net::{TcpListener, TcpStream};
use g3_io_ext::{LimitedReader, LimitedWriter};
use test_tcp_relay::stats::{CltStats, TaskStats, UpsStats};
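// Runtime configuration comes from environment variables, each with a default:
// TEST_LISTEN_ADDR, TEST_CONNECT_ADDR, TEST_SHIFT_MILLIS and TEST_MAX_BYTES.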
static LISTEN_ADDR: Lazy<String> =
Lazy::new(|| env::var("TEST_LISTEN_ADDR").unwrap_or_else(|_| "127.0.0.1:10086".to_string()));
static CONNECT_ADDR: Lazy<String> =
Lazy::new(|| env::var("TEST_CONNECT_ADDR").unwrap_or_else(|_| "127.0.0.1:5201".to_string()));
static SHIFT_MILLIS_STR: Lazy<String> =
Lazy::new(|| env::var("TEST_SHIFT_MILLIS").unwrap_or_else(|_| "10".to_string()));
static MAX_BYTES_STR: Lazy<String> =
Lazy::new(|| env::var("TEST_MAX_BYTES").unwrap_or_else(|_| "1000000".to_string()));
static SHIFT_MILLIS: Lazy<u8> = Lazy::new(|| u8::from_str(SHIFT_MILLIS_STR.as_str()).unwrap_or(10));
static MAX_BYTES: Lazy<usize> =
Lazy::new(|| usize::from_str(MAX_BYTES_STR.as_str()).unwrap_or(1_000_000));
async fn process_socket(mut clt_stream: TcpStream) -> io::Result<()> {
let mut ups_stream = TcpStream::connect(CONNECT_ADDR.as_str()).await?;
println!("new connected task");
let (clt_r, clt_w) = clt_stream.split();
let (ups_r, ups_w) = ups_stream.split();
let task_stats = Arc::new(TaskStats::new());
let (clt_r_stats, clt_w_stats) = CltStats::new_pair(Arc::clone(&task_stats));
let mut clt_r = LimitedReader::new(clt_r, *SHIFT_MILLIS, *MAX_BYTES, clt_r_stats);
let mut clt_w = LimitedWriter::new(clt_w, *SHIFT_MILLIS, *MAX_BYTES, clt_w_stats);
let (ups_r_stats, ups_w_stats) = UpsStats::new_pair(Arc::clone(&task_stats));
let mut ups_r = LimitedReader::new(ups_r, *SHIFT_MILLIS, *MAX_BYTES, ups_r_stats);
let mut ups_w = LimitedWriter::new(ups_w, *SHIFT_MILLIS, *MAX_BYTES, ups_w_stats);
let clt_to_ups = tokio::io::copy(&mut clt_r, &mut ups_w);
let ups_to_clt = tokio::io::copy(&mut ups_r, &mut clt_w);
try_join(clt_to_ups, ups_to_clt).await?;
Ok(())
}
#[tokio::main]
async fn main() -> io::Result<()> {
let listener = TcpListener::bind(LISTEN_ADDR.as_str()).await?;
loop {
let (stream, _) = listener.accept().await?;
tokio::spawn(async move {
if let Err(e) = process_socket(stream).await {
println!("process_socket: {e}");
}
});
}
}

135
demo/test-tcp-relay/src/stats.rs Normal file

@@ -0,0 +1,135 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use g3_io_ext::{
ArcLimitedReaderStats, ArcLimitedWriterStats, LimitedReaderStats, LimitedWriterStats,
};
#[derive(Debug)]
struct HalfConnectionStats {
bytes: AtomicU64,
#[allow(unused)]
delay: AtomicU64,
}
impl HalfConnectionStats {
fn new() -> Self {
HalfConnectionStats {
bytes: AtomicU64::new(0),
delay: AtomicU64::new(0),
}
}
fn add_bytes(&self, size: u64) {
// use an atomic add instead of mutating through a shared reference,
// which would be a data race (undefined behavior) under concurrent access
self.bytes.fetch_add(size, Ordering::Relaxed);
}
}
#[derive(Debug)]
struct ConnectionStats {
read: HalfConnectionStats,
write: HalfConnectionStats,
}
impl ConnectionStats {
fn new() -> Self {
ConnectionStats {
read: HalfConnectionStats::new(),
write: HalfConnectionStats::new(),
}
}
}
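// Per-task byte counters for both the client and upstream connections;
// a summary line is printed when the task ends (see the Drop impl below).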
#[derive(Debug)]
pub struct TaskStats {
clt: ConnectionStats,
ups: ConnectionStats,
}
impl TaskStats {
pub fn new() -> Self {
TaskStats {
clt: ConnectionStats::new(),
ups: ConnectionStats::new(),
}
}
fn print(&self) {
println!("{self:?}");
}
}
impl Default for TaskStats {
fn default() -> Self {
Self::new()
}
}
impl Drop for TaskStats {
fn drop(&mut self) {
self.print()
}
}
#[derive(Clone)]
pub struct CltStats {
task: Arc<TaskStats>,
}
impl CltStats {
pub fn new_pair(task: Arc<TaskStats>) -> (ArcLimitedReaderStats, ArcLimitedWriterStats) {
let s = CltStats { task };
(Arc::new(s.clone()), Arc::new(s))
}
}
impl LimitedReaderStats for CltStats {
fn add_read_bytes(&self, size: usize) {
self.task.clt.read.add_bytes(size as u64);
}
}
impl LimitedWriterStats for CltStats {
fn add_write_bytes(&self, size: usize) {
self.task.clt.write.add_bytes(size as u64);
}
}
#[derive(Clone)]
pub struct UpsStats {
task: Arc<TaskStats>,
}
impl UpsStats {
pub fn new_pair(task: Arc<TaskStats>) -> (ArcLimitedReaderStats, ArcLimitedWriterStats) {
let s = UpsStats { task };
(Arc::new(s.clone()), Arc::new(s))
}
}
impl LimitedReaderStats for UpsStats {
fn add_read_bytes(&self, size: usize) {
self.task.ups.read.add_bytes(size as u64);
}
}
impl LimitedWriterStats for UpsStats {
fn add_write_bytes(&self, size: usize) {
self.task.ups.write.add_bytes(size as u64);
}
}

68
doc/code_coverage.md Normal file

@@ -0,0 +1,68 @@
Code Coverage
-----
Source-based code coverage has been available since Rust 1.60.
# Compilation
The following RUSTFLAGS should be set before compiling:
Shell:
```shell
export RUSTFLAGS="-C instrument-coverage"
```
Fish:
```fish
set -x RUSTFLAGS "-C instrument-coverage"
```
Before running tests, the
[LLVM_PROFILE_FILE](https://clang.llvm.org/docs/SourceBasedCodeCoverage.html#running-the-instrumented-program)
environment variable can be used to set the name of the generated profraw files:
Shell:
```shell
export LLVM_PROFILE_FILE="test-%p-%m.profraw"
```
Fish:
```fish
set -x LLVM_PROFILE_FILE "test-%p-%m.profraw"
```
# Parse and Report
LLVM coverage tools are needed to process coverage data and generate reports.
## Independent llvm-tools
[llvm-profdata](https://llvm.org/docs/CommandGuide/llvm-profdata.html)
is needed to merge all raw profile data files into an indexed profile data file:
Shell:
```shell
llvm-profdata merge -o a.profdata $(find . -type f -name "*profraw" -exec ls \{\} \;)
```
Fish:
```fish
llvm-profdata merge -o a.profdata (find . -type f -name "*profraw" -exec ls \{\} \;)
```
[llvm-cov](https://llvm.org/docs/CommandGuide/llvm-cov.html) is needed to generate reports:
```shell
llvm-cov report --instr-profile=a.profdata --ignore-filename-regex=".cargo" --ignore-filename-regex="rustc" -object <BIN/OBJ>[ -object <BIN/OBJ>]...
```
## Bundled llvm-tools-preview
The llvm-tools you installed may not be compatible with the ones rustc uses. You can use llvm-tools-preview via rustup:
```shell
rustup component add llvm-tools-preview
cargo install cargo-binutils
```
To run the bundled tools, just use `cargo <cmd> -- <params>`.
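For example, a minimal sketch of the merge and report steps from above, run through these wrappers (assuming the profraw files from the tests sit in the current directory):
```shell
cargo profdata -- merge -o a.profdata test-*.profraw
cargo cov -- report --instr-profile=a.profdata --ignore-filename-regex=".cargo" --ignore-filename-regex="rustc" -object <BIN/OBJ>
```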

202
doc/dev-setup.md Normal file

@@ -0,0 +1,202 @@
Dev-Setup
-----
# Toolchain
## Install rustup
See [rustup.rs](https://rustup.rs/) to install **rustup**.
It is recommended to use a non-root user.
*cargo*, *rustc*, *rustup* and other commands will be installed to Cargo's bin directory.
The default path is $HOME/.cargo/bin, and the following examples will use this.
You need to add this directory to your PATH environment variable.
- Bash
The setup script should have already added the following line to your $HOME/.profile:
```shell script
source "$HOME/.cargo/env"
```
- Fish
Run the following command:
```shell script
set -U fish_user_paths $HOME/.cargo/bin $fish_user_paths
```
## Update rustup
```shell script
rustup self update
```
## Install stable toolchains
List all available components:
```shell
rustup component list
```
The following components are required and should have already been installed:
- rustc
- rust-std
- cargo
- rustfmt
- clippy
**llvm-tools-preview** and **rust-src** are also recommended:
```shell script
rustup component add llvm-tools-preview
rustup component add rust-src
```
## Install nightly toolchains
Install nightly toolchains:
```shell script
rustup toolchain install nightly
```
List components in nightly channel:
```shell script
rustup component list --toolchain nightly
```
## Update toolchains
Run the following command to update the toolchains for all channels:
```shell script
rustup update
```
# Plugins for cargo
To install:
```shell script
cargo install <crate name>
```
To update:
```shell script
cargo install -f <crate name>
```
The following plugins are recommended:
- cargo-expand
Needed by IDEs (at least JetBrains' Rust plugin) to expand macros.
The nightly toolchain is also required to run this.
- cargo-outdated
Useful if you want to find outdated dependencies in your Cargo.toml.
- cargo-audit
Audit Cargo.lock for crates with security vulnerabilities.
- cargo-license
To see license of dependencies.
- cargo-binutils
To run llvm-tools-preview installed via rustup.
# IDE
## JetBrains
There is an official [rust plugin](https://plugins.jetbrains.com/plugin/8182-rust) for JetBrains IDEs.
**PyCharm Community Edition** is recommended as we also use Python scripts in this repo.
**CLion** is needed if you want the **DEBUG** feature.
# Dependent Tools and Libraries
## Development Libraries
For *g3proxy*:
```text
c-ares
lua
python3
```
## Development Tools
The tools for C development should be installed, including but not limited to:
```text
gcc
pkg-config
```
If the c-ares version in the OS repo is too old, the following tools are also required:
```text
libtool
make
```
## Rpc Code Generator
We use Cap'n Proto RPC in *g3proxy*:
```text
capnproto
```
## Testing Tools
The following tools are needed to run testing scripts:
```text
llvm
mkcert
curl
```
## Scripting Tools
The following tools are used in scripts under directory *scripts/*:
```text
git
jq
tar
xz
```
## Scripting Libraries
We use python3 for the more complicated scripts; the following packages are needed:
```text
toml
requests
PySocks
dnspython
```
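For example (assuming a pip-based environment), they can be installed with:
```shell script
pip3 install toml requests PySocks dnspython
```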
## Document Tools
We use [sphinx](https://www.sphinx-doc.org/en/master/) to generate docs.
## Packaging Tools
### deb
For all *Debian* based distributions:
```text
lsb-release
devscripts
dpkg-dev
debhelper
```
### rpm
For all *rhel* based distributions:
```text
rpmdevtools
rpm-build
```

309
doc/standards.md Normal file

@@ -0,0 +1,309 @@
Standards
---------
This file lists all the standards we have paid attention to during development.
The code should comply with these, but should lean toward compatibility with existing popular implementations.
# General
## URI
- [rfc3986](https://datatracker.ietf.org/doc/html/rfc3986)
: Uniform Resource Identifier (URI): Generic Syntax
- [URL](https://url.spec.whatwg.org/)
: Living Standard
- [rfc1738](https://datatracker.ietf.org/doc/html/rfc1738)
: Uniform Resource Locators (URL)
## Prefixes for Binary Multiples
- [IEEE 1541-2002](https://en.wikipedia.org/wiki/IEEE_1541-2002)
: IEEE Standard for Prefixes for Binary Multiples
## Date and Time
- [rfc3339](https://datatracker.ietf.org/doc/html/rfc3339)
: Date and Time on the Internet: Timestamps
## UUID
- [rfc4122](https://datatracker.ietf.org/doc/html/rfc4122)
: A Universally Unique IDentifier (UUID) URN Namespace
## JSON-RPC
- [JSON-RPC](https://www.simple-is-better.org/json-rpc/)
: An up-to-date summary of all relevant information about JSON-RPC
- [JSON-RPC 2.0](https://www.jsonrpc.org/specification)
: JSON-RPC 2.0 Specification
- [JSON-RPC 2.0 Transport: HTTP](https://www.simple-is-better.org/json-rpc/transport_http.html)
- [JSON-RPC 2.0 Transport: Sockets](https://www.simple-is-better.org/json-rpc/transport_sockets.html)
## Encoding
- [netstring](http://cr.yp.to/proto/netstrings.txt)
## Syslog
- [rfc3164](https://datatracker.ietf.org/doc/html/rfc3164)
: The BSD syslog Protocol
- [rfc5424](https://datatracker.ietf.org/doc/html/rfc5424)
: The Syslog Protocol
- [CEE Log Syntax](https://cee.mitre.org/language/1.0-beta1/cls.html)
: CEE Log Syntax (CLS) Specification
- [CEE Log Transport](https://cee.mitre.org/language/1.0-beta1/clt.html)
: CEE Log Transport (CLT) Specification
## Fluentd
- [Forward-Protocol-Specification-v1](https://github.com/fluent/fluentd/wiki/Forward-Protocol-Specification-v1)
: Forward Protocol Specification v1
## PEN
- [PRIVATE ENTERPRISE NUMBERS](https://www.iana.org/assignments/enterprise-numbers/enterprise-numbers)
## IP Address
- [rfc6890](https://datatracker.ietf.org/doc/html/rfc6890)
: Special-Purpose IP Address Registries
- [rfc4291](https://datatracker.ietf.org/doc/html/rfc4291)
: IP Version 6 Addressing Architecture
- [rfc8215](https://datatracker.ietf.org/doc/html/rfc8215)
: Local-Use IPv4/IPv6 Translation Prefix
## Semantic Versioning
- [semver](https://semver.org/)
: Semantic Versioning 2.0.0
## X.509
- [rfc7468](https://datatracker.ietf.org/doc/html/rfc7468)
: Textual Encodings of PKIX, PKCS, and CMS Structures
# Network Protocol
## Happy Eyeballs
- [rfc8305](https://datatracker.ietf.org/doc/html/rfc8305)
: Happy Eyeballs Version 2: Better Connectivity Using Concurrency
## PROXY protocol
- [haproxy-proxy-protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)
: The PROXY protocol Versions 1 & 2
## Socks
### Socks4
- [SOCKS4](http://ftp.icm.edu.pl/packages/socks/socks4/SOCKS4.protocol)
: SOCKS: A protocol for TCP proxy across firewalls
- [SOCKS4a](https://www.openssh.com/txt/socks4a.protocol)
: SOCKS 4A: A Simple Extension to SOCKS 4 Protocol
### Socks5
- [rfc1928](https://datatracker.ietf.org/doc/html/rfc1928)
: SOCKS Protocol Version 5
- [rfc1929](https://datatracker.ietf.org/doc/html/rfc1929)
: Username/Password Authentication for SOCKS V5
- [rfc1961](https://datatracker.ietf.org/doc/html/rfc1961)
: GSS-API Authentication Method for SOCKS Version 5
- [draft-ietf-aft-socks-chap-01](https://datatracker.ietf.org/doc/html/draft-ietf-aft-socks-chap-01)
: Challenge-Handshake Authentication Protocol for SOCKS V5
### Socks6
- [draft-olteanu-intarea-socks-6-11](https://datatracker.ietf.org/doc/html/draft-olteanu-intarea-socks-6-11)
: SOCKS Protocol Version 6
## DNS
- [rfc2181](https://datatracker.ietf.org/doc/html/rfc2181)
: Clarifications to the DNS Specification
- [rfc4343](https://datatracker.ietf.org/doc/html/rfc4343)
: Domain Name System (DNS) Case Insensitivity Clarification
- [draft-madi-dnsop-udp4dns-00](https://datatracker.ietf.org/doc/id/draft-madi-dnsop-udp4dns-00.html)
: UDP payload size for DNS messages
- [rfc5625](https://datatracker.ietf.org/doc/html/rfc5625)
: DNS Proxy Implementation Guidelines
- [rfc5891](https://datatracker.ietf.org/doc/html/rfc5891)
: Internationalized Domain Names in Applications (IDNA): Protocol
- [rfc6891](https://datatracker.ietf.org/doc/html/rfc6891)
: Extension Mechanisms for DNS (EDNS(0))
- [rfc6761](https://datatracker.ietf.org/doc/html/rfc6761)
: Special-Use Domain Names
- [rfc7858](https://datatracker.ietf.org/doc/html/rfc7858)
: Specification for DNS over Transport Layer Security (TLS)
- [rfc8484](https://datatracker.ietf.org/doc/html/rfc8484)
: DNS Queries over HTTPS (DoH)
- [iana-domains-reserved](https://www.iana.org/domains/reserved)
: IANA-managed Reserved Domains
## SSH
- [rfc4253](https://datatracker.ietf.org/doc/html/rfc4253)
: The Secure Shell (SSH) Transport Layer Protocol
## TLS
- [rfc8446](https://datatracker.ietf.org/doc/html/rfc8446)
: The Transport Layer Security (TLS) Protocol Version 1.3
- [GB/T 38636-2020](https://openstd.samr.gov.cn/bzgk/gb/newGbInfo?hcno=778097598DA2761E94A5FF3F77BD66DA)
: Information security technology—Transport layer cryptography protocol(TLCP)
## HTTP
- [rfc9110](https://datatracker.ietf.org/doc/html/rfc9110)
: HTTP Semantics
- [rfc9111](https://datatracker.ietf.org/doc/html/rfc9111)
: HTTP Caching
- [mozilla-http](https://developer.mozilla.org/en-US/docs/Web/HTTP)
: Web technology for developers - HTTP
- [rfc7617](https://datatracker.ietf.org/doc/html/rfc7617)
: The 'Basic' HTTP Authentication Scheme
- [rfc7239](https://datatracker.ietf.org/doc/html/rfc7239)
: Forwarded HTTP Extension
- [iana-http-methods](https://www.iana.org/assignments/http-methods)
: Hypertext Transfer Protocol (HTTP) Method Registry
- [iana-http-status-codes](https://www.iana.org/assignments/http-status-codes/http-status-codes)
: Hypertext Transfer Protocol (HTTP) Status Code Registry
- [mozilla-http-headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers)
: HTTP headers
- [rfc6648](https://datatracker.ietf.org/doc/html/rfc6648)
: Deprecating the "X-" Prefix and Similar Constructs in Application Protocols
- [rfc9297](https://datatracker.ietf.org/doc/html/rfc9297)
: HTTP Datagrams and the Capsule Protocol
- [rfc9298](https://datatracker.ietf.org/doc/html/rfc9298)
: Proxying UDP in HTTP
- [iana-http-upgrade-tokens](https://www.iana.org/assignments/http-upgrade-tokens/http-upgrade-tokens.xhtml)
: Hypertext Transfer Protocol (HTTP) Upgrade Token Registry
- [iana-well-known-uris](https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml)
: Well-Known URIs
### HTTP/1.0
- [rfc1945](https://datatracker.ietf.org/doc/html/rfc1945)
: Hypertext Transfer Protocol -- HTTP/1.0
### HTTP/1.1
- [rfc9112](https://datatracker.ietf.org/doc/html/rfc9112)
: HTTP/1.1
### HTTP/2
- [rfc9113](https://datatracker.ietf.org/doc/html/rfc9113)
: HTTP/2
### HTTP/3
- [rfc9114](https://datatracker.ietf.org/doc/html/rfc9114)
: HTTP/3
### WebSocket
- [rfc6455](https://datatracker.ietf.org/doc/html/rfc6455)
: The WebSocket Protocol
- [rfc8441](https://datatracker.ietf.org/doc/html/rfc8441)
: Bootstrapping WebSockets with HTTP/2
- [rfc9220](https://datatracker.ietf.org/doc/html/rfc9220)
: Bootstrapping WebSockets with HTTP/3
- [nginx-websocket-proxying](https://nginx.org/en/docs/http/websocket.html)
: WebSocket proxying
## FTP
- [rfc959](https://datatracker.ietf.org/doc/html/rfc959)
: FILE TRANSFER PROTOCOL (FTP)
- [rfc1639](https://datatracker.ietf.org/doc/html/rfc1639)
: FTP Operation Over Big Address Records (FOOBAR)
- [rfc2389](https://datatracker.ietf.org/doc/html/rfc2389)
: Feature negotiation mechanism for the File Transfer Protocol
- [rfc2428](https://datatracker.ietf.org/doc/html/rfc2428)
: FTP Extensions for IPv6 and NATs
- [rfc2640](https://datatracker.ietf.org/doc/html/rfc2640)
: Internationalization of the File Transfer Protocol
- [rfc3659](https://datatracker.ietf.org/doc/html/rfc3659)
: Extensions to FTP
- [rfc7151](https://datatracker.ietf.org/doc/html/rfc7151)
: File Transfer Protocol HOST Command for Virtual Hosts
- [iana-ftp-commands-extensions](https://www.iana.org/assignments/ftp-commands-extensions/ftp-commands-extensions.xhtml)
: FTP Commands and Extensions
- [draft-ietf-ftpext-utf-8-option-00](https://datatracker.ietf.org/doc/html/draft-ietf-ftpext-utf-8-option-00)
: UTF-8 Option for FTP
- [draft-ietf-ftpext-data-connection-assurance](https://datatracker.ietf.org/doc/html/draft-ietf-ftpext-data-connection-assurance)
: FTP Data Connection Assurance
- [draft-dd-pret-00](https://datatracker.ietf.org/doc/html/draft-dd-pret-00)
: Distributed Transfer Support for FTP
- [draft-rosenau-ftp-single-port-05](https://datatracker.ietf.org/doc/html/draft-rosenau-ftp-single-port-05)
: FTP EXTENSION ALLOWING IP FORWARDING (NATs)
## SMTP
- [rfc5321](https://datatracker.ietf.org/doc/html/rfc5321)
: Simple Mail Transfer Protocol
## POP3
- [rfc1939](https://datatracker.ietf.org/doc/html/rfc1939)
: Post Office Protocol - Version 3
## IMAP
- [rfc3501](https://datatracker.ietf.org/doc/html/rfc3501)
: INTERNET MESSAGE ACCESS PROTOCOL - VERSION 4rev1
- [rfc7162](https://datatracker.ietf.org/doc/html/rfc7162)
: IMAP Extensions: Quick Flag Changes Resynchronization (CONDSTORE) and Quick Mailbox Resynchronization (QRESYNC)
## NNTP
- [rfc3977](https://datatracker.ietf.org/doc/html/rfc3977)
: Network News Transfer Protocol (NNTP)
- [rfc8143](https://datatracker.ietf.org/doc/html/rfc8143)
: Using Transport Layer Security (TLS) with Network News Transfer Protocol (NNTP)
## MQTT
- [mqtt-v5.0-os](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html)
: MQTT Version 5.0 OASIS Standard
- [mqtt-v3.1.1-os](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html)
: MQTT Version 3.1.1 OASIS Standard
## RTMP
- [rtmp_specification_1.0](https://rtmp.veriskope.com/docs/spec/)
: Adobe RTMP Specification
## RTSP/2.0
- [rfc7826](https://datatracker.ietf.org/doc/html/rfc7826)
: Real-Time Streaming Protocol Version 2.0
## BitTorrent
- [bep_0003](http://bittorrent.org/beps/bep_0003.html)
: The BitTorrent Protocol Specification
## ICAP
- [rfc3507](https://datatracker.ietf.org/doc/html/rfc3507)
: Internet Content Adaptation Protocol (ICAP)
- [draft-icap-ext-partial-content-07](http://www.icap-forum.org/documents/specification/draft-icap-ext-partial-content-07.txt)
: ICAP Partial Content Extension
## WCCP
- [draft-wilson-wrec-wccp-v2-01](https://datatracker.ietf.org/doc/html/draft-wilson-wrec-wccp-v2-01)
: Web Cache Communication Protocol V2.0
## NAT Traversal
- [rfc8489](https://datatracker.ietf.org/doc/html/rfc8489)
: Session Traversal Utilities for NAT (STUN)
- [rfc8656](https://datatracker.ietf.org/doc/html/rfc8656)
: Traversal Using Relays around NAT (TURN): Relay Extensions to Session Traversal Utilities for NAT (STUN)
- [rfc8445](https://datatracker.ietf.org/doc/html/rfc8445)
: Interactive Connectivity Establishment (ICE): A Protocol for Network Address Translator (NAT) Traversal

74
g3bench/CHANGELOG Normal file

@@ -0,0 +1,74 @@
v0.6.1:
- Feature: add --no-multiplex option to h2 target
v0.6.0:
- Feature: add new ssl test target
- Feature: add config option to control connect timeout
- Feature: resolve domain in early stage and allow to set pick policy
- BUG FIX: fix the use of local address specified in args
v0.5.6:
- BUG FIX: really use h1 & h2 timeout config option
- Feature: allow to disable TLS SNI
- Feature: use http prefix for h1 & h2 metrics and add 'target' tag
v0.5.5:
- Feature: add more tls config options for h1 and h2 target
v0.5.4:
- Optimization: don't wait for h1 connection shutdown, and add shutdown error stats
v0.5.3:
- Feature: allow to use unaided workers
- Feature: allow to disable TLS session cache when handshake with target site
v0.5.2:
- Feature: add --resolve global option to set resolve redirection
v0.5.1:
- BUG FIX: fix command line handling
v0.5.0:
- Feature: add new h2 test target
v0.4.4:
- Optimization: use batch update of progress bar
v0.4.3:
- Feature: add connection usage summary to h1 target
v0.4.2:
- Feature: add requests distribution summary
- BUG FIX: fix traffic read summary
v0.4.1:
- Feature: add connection stats
- BUG FIX: fix traffic summary
v0.4.0:
- Feature: allow to set time limit
- Feature: do graceful quit at Ctrl-C
- Feature: summary io stats in final report
v0.3.1:
- BUG FIX: fix the meaning of --proxy-tunnel
v0.3.0:
- Feature: allow to disable progress bar
- Optimization: h1: use the same proxy args as curl
v0.2.0:
- Feature: h1 target: allow to emit histogram stats
- Feature: add pid tag to metrics
- Optimization: also use hdrhistogram for final report
v0.1.2:
- BUG FIX: fix tls connect for h1 target when using CONNECT proxy
- Feature: support set tcp rate limit config
v0.1.1:
- BUG FIX: fix tls connect for h1 target
v0.1.0:
- Initial release

41
g3bench/Cargo.toml Normal file

@@ -0,0 +1,41 @@
[package]
name = "g3bench"
version = "0.6.1"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
anyhow = "1.0"
once_cell = "1.7"
clap = "4.0"
clap_complete = "4.0"
async-trait = "0.1"
indicatif = "0.17"
tokio = { version = "1.0", features = ["rt", "net", "macros"] }
http = "0.2"
url = "2.1"
h2 = "0.3"
bytes = "1.0"
futures-util = "0.3"
tokio-openssl = "0.6"
openssl = "0.10"
cadence = { package = "cadence-with-flush", version = "0.29" }
hdrhistogram = "7.5"
ahash = "0.8"
g3-runtime = { path = "../lib/g3-runtime" }
g3-signal = { path = "../lib/g3-signal" }
g3-types = { path = "../lib/g3-types", features = ["proxy"] }
g3-clap = { path = "../lib/g3-clap" }
g3-socket = { path = "../lib/g3-socket" }
g3-http = { path = "../lib/g3-http" }
g3-socks = { path = "../lib/g3-socks" }
g3-io-ext = { path = "../lib/g3-io-ext" }
g3-statsd = { path = "../lib/g3-statsd" }
[build-dependencies]
rustc_version = "0.4"
[features]
default = []
vendored-openssl = ["openssl/vendored"]

51
g3bench/build.rs Normal file

@@ -0,0 +1,51 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::env;
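// Export build-time metadata to the compiler; the G3_BUILD_* variables set
// below are read back at compile time via env!() in src/build.rs.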
fn main() {
let rustc = rustc_version::version_meta().unwrap();
println!(
"cargo:rustc-env=G3_BUILD_RUSTC_VERSION={}",
rustc.short_version_string
);
println!("cargo:rustc-env=G3_BUILD_RUSTC_CHANNEL={:?}", rustc.channel);
println!(
"cargo:rustc-env=G3_BUILD_HOST={}",
env::var("HOST").unwrap()
);
println!(
"cargo:rustc-env=G3_BUILD_TARGET={}",
env::var("TARGET").unwrap()
);
println!(
"cargo:rustc-env=G3_BUILD_PROFILE={}",
env::var("PROFILE").unwrap()
);
println!(
"cargo:rustc-env=G3_BUILD_OPT_LEVEL={}",
env::var("OPT_LEVEL").unwrap()
);
println!(
"cargo:rustc-env=G3_BUILD_DEBUG={}",
env::var("DEBUG").unwrap()
);
if let Ok(v) = env::var("G3_PACKAGE_VERSION") {
println!("cargo:rustc-env=G3_PACKAGE_VERSION={v}");
}
}

5
g3bench/debian/changelog Normal file

@ -0,0 +1,5 @@
g3bench (0.6.1-1) UNRELEASED; urgency=medium

  * New upstream release.

 -- G3bench Maintainers <g3bench-maintainers@devel.machine>  Thu, 09 Mar 2023 17:49:04 +0800

1
g3bench/debian/compat Normal file

@ -0,0 +1 @@
10

12
g3bench/debian/control Normal file

@ -0,0 +1,12 @@
Source: g3bench
Section: net
Priority: optional
Maintainer: G3bench Maintainers <g3bench-maintainers@devel.machine>
Build-Depends: debhelper
Standards-Version: 3.9.8

Package: g3bench
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Recommends: ca-certificates
Description: Benchmark tool for G3 Project


@ -0,0 +1 @@
usr/bin/g3bench

27
g3bench/debian/rules Executable file

@ -0,0 +1,27 @@
#!/usr/bin/make -f

PACKAGE_NAME := g3bench
BUILD_PROFILE := release-lto

DEB_VERSION ?= $(shell dpkg-parsechangelog -SVersion)
SSL_FEATURE ?= $(shell scripts/package/detect_openssl_feature.sh)

%:
	dh $@

override_dh_auto_clean:
	cargo clean --frozen --offline --release --package g3bench

override_dh_auto_build:
	G3_PACKAGE_VERSION=$(DEB_VERSION) \
	cargo build --frozen --offline --profile $(BUILD_PROFILE) \
		--no-default-features --features $(SSL_FEATURE), \
		--package g3bench

override_dh_auto_install:
	dh_auto_install
	install -m 755 -D target/$(BUILD_PROFILE)/g3bench debian/tmp/usr/bin/g3bench

override_dh_installchangelogs:
	dh_installchangelogs $(PACKAGE_NAME)/CHANGELOG


@ -0,0 +1 @@
3.0 (quilt)

49
g3bench/g3bench.spec Normal file

@ -0,0 +1,49 @@
%if 0%{?rhel} > 7
%undefine _debugsource_packages
%endif

%if 0%{?rhel} == 7
%global debug_package %{nil}
%endif

%define build_profile release-lto

Name:           g3bench
Version:        0.6.1
Release:        1%{?dist}
Summary:        Benchmark tool for G3 Project

License:        Unspecified
#URL:
Source0:        %{name}-%{version}.tar.xz

Requires:       ca-certificates

%description
Benchmark tool for G3 Project

%prep
%autosetup

%build
G3_PACKAGE_VERSION="%{version}-%{release}"
export G3_PACKAGE_VERSION
SSL_FEATURE=$(pkg-config --atleast-version 1.1.1 libssl || echo "vendored-openssl")
cargo build --frozen --offline --profile %{build_profile} --no-default-features --features $SSL_FEATURE, --package g3bench

%install
rm -rf $RPM_BUILD_ROOT
install -m 755 -D target/%{build_profile}/g3bench %{buildroot}%{_bindir}/g3bench

%files
#%license add-license-file-here
%{_bindir}/g3bench

%changelog
* Thu Mar 09 2023 G3bench Maintainers <g3bench-maintainers@devel.machine> - 0.6.1-1
- New upstream release

39
g3bench/src/build.rs Normal file

@ -0,0 +1,39 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
pub const VERSION: &str = env!("CARGO_PKG_VERSION");
pub const PKG_NAME: &str = env!("CARGO_PKG_NAME");
const RUSTC_VERSION: &str = env!("G3_BUILD_RUSTC_VERSION");
const RUSTC_CHANNEL: &str = env!("G3_BUILD_RUSTC_CHANNEL");
const BUILD_HOST: &str = env!("G3_BUILD_HOST");
const BUILD_TARGET: &str = env!("G3_BUILD_TARGET");
const BUILD_PROFILE: &str = env!("G3_BUILD_PROFILE");
const BUILD_OPT_LEVEL: &str = env!("G3_BUILD_OPT_LEVEL");
const BUILD_DEBUG: &str = env!("G3_BUILD_DEBUG");
const PACKAGE_VERSION: Option<&str> = option_env!("G3_PACKAGE_VERSION");
pub fn print_version() {
println!("{PKG_NAME} {VERSION}");
println!("Compiler: {RUSTC_VERSION} ({RUSTC_CHANNEL})");
println!("Host: {BUILD_HOST}, Target: {BUILD_TARGET}");
println!("Profile: {BUILD_PROFILE}, Opt Level: {BUILD_OPT_LEVEL}, Debug: {BUILD_DEBUG}");
if let Some(package_version) = PACKAGE_VERSION {
println!("Package Version: {package_version}");
}
}

23
g3bench/src/lib.rs Normal file

@ -0,0 +1,23 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
mod opts;
pub mod build;
pub mod target;
pub mod worker;
pub use opts::{add_global_args, parse_global_args, ProcArgs};

98
g3bench/src/main.rs Normal file

@ -0,0 +1,98 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::io;
use std::sync::Arc;
use anyhow::{anyhow, Context};
use clap::{value_parser, Arg, ArgMatches, Command};
use clap_complete::Shell;
const COMMAND_VERSION: &str = "version";
const COMMAND_COMPLETION: &str = "completion";
fn build_cli_args() -> Command {
g3bench::add_global_args(Command::new("g3bench"))
.subcommand(Command::new(COMMAND_VERSION).override_help("Show version"))
.subcommand(
Command::new(COMMAND_COMPLETION).arg(
Arg::new("target")
.value_name("SHELL")
.required(true)
.num_args(1)
.value_parser(value_parser!(Shell)),
),
)
.subcommand(g3bench::target::h1::command())
.subcommand(g3bench::target::h2::command())
.subcommand(g3bench::target::ssl::command())
}
fn main() -> anyhow::Result<()> {
openssl::init();
let args = build_cli_args().get_matches();
let proc_args = g3bench::parse_global_args(&args)?;
let proc_args = Arc::new(proc_args);
let (subcommand, sub_args) = args
.subcommand()
.ok_or_else(|| anyhow!("no subcommand found"))?;
match subcommand {
COMMAND_VERSION => {
g3bench::build::print_version();
return Ok(());
}
COMMAND_COMPLETION => {
generate_completion(sub_args);
return Ok(());
}
_ => {}
}
proc_args.summary();
let rt = proc_args
.main_runtime()
.start()
.context("failed to start main runtime")?;
rt.block_on(async move {
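// keep the guard alive for the whole benchmark so the unaided worker
// runtimes are not dropped while tasks are still running on them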
let _worker_guard = if let Some(worker_config) = proc_args.worker_runtime() {
let guard = g3bench::worker::spawn_workers(&worker_config)
.await
.context("failed to start workers")?;
Some(guard)
} else {
None
};
match subcommand {
g3bench::target::h1::COMMAND => g3bench::target::h1::run(&proc_args, sub_args).await,
g3bench::target::h2::COMMAND => g3bench::target::h2::run(&proc_args, sub_args).await,
g3bench::target::ssl::COMMAND => g3bench::target::ssl::run(&proc_args, sub_args).await,
cmd => Err(anyhow!("invalid subcommand {}", cmd)),
}
})
}
fn generate_completion(args: &ArgMatches) {
if let Some(target) = args.get_one::<Shell>("target") {
let mut app = build_cli_args();
let bin_name = app.get_name().to_string();
clap_complete::generate(*target, &mut app, bin_name, &mut io::stdout());
}
}

442
g3bench/src/opts.rs Normal file

@ -0,0 +1,442 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::net::{IpAddr, SocketAddr};
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant};
use ahash::AHashMap;
use anyhow::{anyhow, Context};
use cadence::StatsdClient;
use clap::{value_parser, Arg, ArgAction, ArgMatches, Command, ValueHint};
use indicatif::{ProgressBar, ProgressStyle};
use g3_runtime::blended::BlendedRuntimeConfig;
use g3_runtime::unaided::UnaidedRuntimeConfig;
use g3_statsd::client::{StatsdBackend, StatsdClientConfig};
use g3_types::collection::{SelectivePickPolicy, SelectiveVec, SelectiveVecBuilder, WeightedValue};
use g3_types::metrics::MetricsName;
use g3_types::net::{TcpSockSpeedLimitConfig, UpstreamAddr};
const GLOBAL_ARG_UNAIDED: &str = "unaided";
const GLOBAL_ARG_UNCONSTRAINED: &str = "unconstrained";
const GLOBAL_ARG_THREADS: &str = "threads";
const GLOBAL_ARG_THREAD_STACK_SIZE: &str = "thread-stack-size";
const GLOBAL_ARG_CONCURRENCY: &str = "concurrency";
const GLOBAL_ARG_TIME_LIMIT: &str = "time-limit";
const GLOBAL_ARG_REQUESTS: &str = "requests";
const GLOBAL_ARG_RESOLVE: &str = "resolve";
const GLOBAL_ARG_LOG_ERROR: &str = "log-error";
const GLOBAL_ARG_EMIT_METRICS: &str = "emit-metrics";
const GLOBAL_ARG_STATSD_TARGET_UDP: &str = "statsd-target-udp";
const GLOBAL_ARG_STATSD_TARGET_UNIX: &str = "statsd-target-unix";
const GLOBAL_ARG_NO_PROGRESS_BAR: &str = "no-progress-bar";
const GLOBAL_ARG_PEER_PICK_POLICY: &str = "peer-pick-policy";
const GLOBAL_ARG_TCP_LIMIT_SHIFT: &str = "tcp-limit-shift";
const GLOBAL_ARG_TCP_LIMIT_BYTES: &str = "tcp-limit-bytes";
const DEFAULT_STAT_PREFIX: &str = "g3bench";
pub struct ProcArgs {
pub(super) concurrency: usize,
pub(super) requests: Option<usize>,
pub(super) time_limit: Option<Duration>,
pub(super) log_error_count: usize,
pub(super) task_unconstrained: bool,
resolver: AHashMap<UpstreamAddr, IpAddr>,
use_unaided_worker: bool,
thread_number: Option<usize>,
thread_stack_size: Option<usize>,
statsd_client_config: Option<StatsdClientConfig>,
no_progress_bar: bool,
peer_pick_policy: SelectivePickPolicy,
pub(super) tcp_sock_speed_limit: TcpSockSpeedLimitConfig,
}
impl Default for ProcArgs {
fn default() -> Self {
ProcArgs {
concurrency: 1,
requests: None,
time_limit: None,
log_error_count: 0,
task_unconstrained: false,
resolver: AHashMap::new(),
use_unaided_worker: false,
thread_number: None,
thread_stack_size: None,
statsd_client_config: None,
no_progress_bar: false,
peer_pick_policy: SelectivePickPolicy::RoundRobin,
tcp_sock_speed_limit: TcpSockSpeedLimitConfig::default(),
}
}
}
impl ProcArgs {
pub fn summary(&self) {
println!("Concurrency Level: {}", self.concurrency);
println!();
}
pub(super) fn new_progress_bar(&self) -> Option<ProgressBar> {
if self.no_progress_bar {
None
} else if let Some(requests) = self.requests {
let bar = ProgressBar::new(requests as u64).with_style(
ProgressStyle::default_bar()
.progress_chars("=>-")
.template("[{elapsed_precise}] {wide_bar} {pos}/{len}")
.unwrap(),
);
Some(bar)
} else {
None
}
}
pub(super) fn new_statsd_client(&self) -> Option<(StatsdClient, Duration)> {
if let Some(config) = &self.statsd_client_config {
match config.build() {
Ok(builder) => {
let start_instant = Instant::now();
let client = builder
.with_error_handler(move |e| {
// report at most once per 64s window; an atomic avoids the data race
// of writing a `static mut` from a handler that may run concurrently
static LAST_REPORT_TIME_SLICE: AtomicU64 = AtomicU64::new(0);
let time_slice = start_instant.elapsed().as_secs() >> 6; // changes every 64s
if LAST_REPORT_TIME_SLICE.swap(time_slice, Ordering::Relaxed) != time_slice {
eprintln!("sending metrics error: {e:?}");
}
})
.with_tag("pid", std::process::id())
.build();
Some((client, config.emit_duration))
}
Err(e) => {
eprintln!("unable to build statsd client: {e}");
None
}
}
} else {
None
}
}
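// parse a curl-style `--resolve HOST:PORT:ADDR` value; splitting from the
// right keeps the HOST:PORT pair intact and takes the last field as the IP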
fn parse_resolve_value(&mut self, v: &str) -> anyhow::Result<()> {
let mut parts = v.rsplitn(2, ':');
let ip = parts
.next()
.ok_or_else(|| anyhow!("no ip field found"))?;
let upstream = parts.next().ok_or_else(|| anyhow!("no upstream field found"))?;
let upstream = UpstreamAddr::from_str(upstream).context("invalid upstream addr")?;
let ip = IpAddr::from_str(ip).map_err(|e| anyhow!("invalid ip address: {e}"))?;
self.resolver.insert(upstream, ip);
Ok(())
}
pub(super) async fn resolve(
&self,
upstream: &UpstreamAddr,
) -> anyhow::Result<SelectiveVec<WeightedValue<SocketAddr>>> {
let mut builder = SelectiveVecBuilder::new();
if let Some(ip) = self.resolver.get(upstream) {
let addr = SocketAddr::new(*ip, upstream.port());
builder.insert(WeightedValue::new(addr));
} else {
let addrs = tokio::net::lookup_host(upstream.to_string())
.await
.map_err(|e| anyhow!("failed to resolve address for {upstream}: {e:?}"))?;
for addr in addrs {
builder.insert(WeightedValue::new(addr));
}
}
builder
.build()
.map_err(|e| anyhow!("failed to build vec: {e}"))
}
pub(super) fn select_peer<'a, T>(&'a self, peers: &'a SelectiveVec<WeightedValue<T>>) -> &'a T {
match self.peer_pick_policy {
SelectivePickPolicy::Random => peers.pick_random().inner(),
SelectivePickPolicy::Serial => peers.pick_serial().inner(),
SelectivePickPolicy::RoundRobin => peers.pick_round_robin().inner(),
_ => unreachable!(),
}
}
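// with --unaided, the main runtime gets no worker threads of its own;
// benchmark tasks run on the separate unaided worker runtimes instead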
pub fn main_runtime(&self) -> BlendedRuntimeConfig {
if self.use_unaided_worker {
let mut main_runtime = BlendedRuntimeConfig::new();
main_runtime.set_thread_number(0);
main_runtime
} else {
let mut runtime = BlendedRuntimeConfig::new();
if let Some(thread_number) = self.thread_number {
runtime.set_thread_number(thread_number);
}
if let Some(thread_stack_size) = self.thread_stack_size {
runtime.set_thread_stack_size(thread_stack_size);
}
runtime
}
}
pub fn worker_runtime(&self) -> Option<UnaidedRuntimeConfig> {
if self.use_unaided_worker {
let mut runtime = UnaidedRuntimeConfig::new();
if let Some(thread_number) = self.thread_number {
runtime.set_thread_number(thread_number);
}
if let Some(thread_stack_size) = self.thread_stack_size {
runtime.set_thread_stack_size(thread_stack_size);
}
Some(runtime)
} else {
None
}
}
}
pub fn add_global_args(app: Command) -> Command {
app.arg(
Arg::new(GLOBAL_ARG_CONCURRENCY)
.help("Number of multiple requests to make at a time")
.value_name("CONCURRENCY COUNT")
.short('c')
.long(GLOBAL_ARG_CONCURRENCY)
.global(true)
.num_args(1)
.value_parser(value_parser!(usize))
.default_value("1"),
)
.arg(
Arg::new(GLOBAL_ARG_TIME_LIMIT)
.help("Maximum time to spend for benchmarking")
.value_name("TOTAL TIME")
.global(true)
.short('t')
.long(GLOBAL_ARG_TIME_LIMIT)
.num_args(1),
)
.arg(
Arg::new(GLOBAL_ARG_REQUESTS)
.help("Number of requests to perform")
.value_name("REQUEST COUNT")
.global(true)
.short('n')
.long(GLOBAL_ARG_REQUESTS)
.num_args(1)
.value_parser(value_parser!(usize)),
// FIXME use default_value and default_value_if(GLOBAL_ARG_TIME_LIMIT, None, None)
// after these methods support global args
)
.arg(
Arg::new(GLOBAL_ARG_RESOLVE)
.help("Provide a custom address for a specific host and port pair")
.value_name("host:port:addr")
.global(true)
.long(GLOBAL_ARG_RESOLVE)
.action(ArgAction::Append),
)
.arg(
Arg::new(GLOBAL_ARG_UNAIDED)
.help("Use unaided worker for tasks")
.global(true)
.long(GLOBAL_ARG_UNAIDED)
.action(ArgAction::SetTrue),
)
.arg(
Arg::new(GLOBAL_ARG_UNCONSTRAINED)
.help("Run benchmark task unconstrained")
.global(true)
.long(GLOBAL_ARG_UNCONSTRAINED)
.action(ArgAction::SetTrue),
)
.arg(
Arg::new(GLOBAL_ARG_THREADS)
.help("Number of threads")
.value_name("THREAD NUMBER")
.long(GLOBAL_ARG_THREADS)
.global(true)
.num_args(1)
.value_parser(value_parser!(usize)),
)
.arg(
Arg::new(GLOBAL_ARG_THREAD_STACK_SIZE)
.long(GLOBAL_ARG_THREAD_STACK_SIZE)
.value_name("STACK SIZE")
.global(true)
.num_args(1),
)
.arg(
Arg::new(GLOBAL_ARG_LOG_ERROR)
.help("Number of error requests to log")
.value_name("COUNT")
.long(GLOBAL_ARG_LOG_ERROR)
.global(true)
.num_args(1)
.value_parser(value_parser!(usize)),
)
.arg(
Arg::new(GLOBAL_ARG_EMIT_METRICS)
.help("Set if we need to emit metrics to statsd")
.action(ArgAction::SetTrue)
.long(GLOBAL_ARG_EMIT_METRICS)
.global(true),
)
.arg(
Arg::new(GLOBAL_ARG_STATSD_TARGET_UDP)
.help("Set the udp statsd target address")
.value_name("UDP SOCKET ADDRESS")
.long(GLOBAL_ARG_STATSD_TARGET_UDP)
.global(true)
.num_args(1)
.value_parser(value_parser!(SocketAddr)),
)
.arg(
Arg::new(GLOBAL_ARG_STATSD_TARGET_UNIX)
.help("Set the unix statsd target address")
.value_name("UNIX SOCKET ADDRESS")
.long(GLOBAL_ARG_STATSD_TARGET_UNIX)
.global(true)
.num_args(1)
.value_hint(ValueHint::FilePath)
.value_parser(value_parser!(PathBuf)),
)
.arg(
Arg::new(GLOBAL_ARG_NO_PROGRESS_BAR)
.help("Disable progress bar")
.action(ArgAction::SetTrue)
.long(GLOBAL_ARG_NO_PROGRESS_BAR)
.global(true),
)
.arg(
Arg::new(GLOBAL_ARG_PEER_PICK_POLICY)
.help("Set the pick policy for selecting peers")
.long(GLOBAL_ARG_PEER_PICK_POLICY)
.global(true)
.value_parser(["rr", "random", "serial"])
.default_value("rr")
.num_args(1),
)
.arg(
Arg::new(GLOBAL_ARG_TCP_LIMIT_SHIFT)
.help("Shift value for the TCP per connection rate limit config")
.value_name("SHIFT VALUE")
.long(GLOBAL_ARG_TCP_LIMIT_SHIFT)
.global(true)
.num_args(1)
.value_parser(["2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12"])
.default_value("10")
.requires(GLOBAL_ARG_TCP_LIMIT_BYTES),
)
.arg(
Arg::new(GLOBAL_ARG_TCP_LIMIT_BYTES)
.help("Bytes value for the TCP per connect rate limit config")
.value_name("BYTES COUNT")
.long(GLOBAL_ARG_TCP_LIMIT_BYTES)
.global(true)
.num_args(1)
.value_parser(value_parser!(usize)),
)
}
pub fn parse_global_args(args: &ArgMatches) -> anyhow::Result<ProcArgs> {
let mut proc_args = ProcArgs::default();
if let Some(n) = args.get_one::<usize>(GLOBAL_ARG_CONCURRENCY) {
proc_args.concurrency = *n;
}
if let Some(n) = args.get_one::<usize>(GLOBAL_ARG_REQUESTS) {
proc_args.requests = Some(*n);
}
if let Some(values) = args.get_many::<String>(GLOBAL_ARG_RESOLVE) {
for v in values {
proc_args
.parse_resolve_value(v)
.context(format!("invalid resolve string {v}"))?;
}
}
proc_args.time_limit = g3_clap::humanize::get_duration(args, GLOBAL_ARG_TIME_LIMIT)?;
if args.get_flag(GLOBAL_ARG_UNAIDED) {
proc_args.use_unaided_worker = true;
}
if args.get_flag(GLOBAL_ARG_UNCONSTRAINED) {
proc_args.task_unconstrained = true;
}
if let Some(n) = args.get_one::<usize>(GLOBAL_ARG_THREADS) {
proc_args.thread_number = Some(*n);
}
if let Some(stack_size) = g3_clap::humanize::get_usize(args, GLOBAL_ARG_THREAD_STACK_SIZE)? {
if stack_size > 0 {
proc_args.thread_stack_size = Some(stack_size);
}
}
if let Some(n) = args.get_one::<usize>(GLOBAL_ARG_LOG_ERROR) {
proc_args.log_error_count = *n;
}
if args.get_flag(GLOBAL_ARG_EMIT_METRICS) {
let mut config =
StatsdClientConfig::with_prefix(MetricsName::from_str(DEFAULT_STAT_PREFIX).unwrap());
if let Some(addr) = args.get_one::<SocketAddr>(GLOBAL_ARG_STATSD_TARGET_UDP) {
config.set_backend(StatsdBackend::Udp(*addr, None));
}
if let Some(path) = args.get_one::<PathBuf>(GLOBAL_ARG_STATSD_TARGET_UNIX) {
config.set_backend(StatsdBackend::Unix(path.clone()));
}
proc_args.statsd_client_config = Some(config);
}
if args.get_flag(GLOBAL_ARG_NO_PROGRESS_BAR) || proc_args.requests.is_none() {
proc_args.no_progress_bar = true;
}
if let Some(s) = args.get_one::<String>(GLOBAL_ARG_PEER_PICK_POLICY) {
proc_args.peer_pick_policy = SelectivePickPolicy::from_str(s).unwrap();
}
if let Some(bytes) = args.get_one::<usize>(GLOBAL_ARG_TCP_LIMIT_BYTES) {
let shift = args.get_one::<String>(GLOBAL_ARG_TCP_LIMIT_SHIFT).unwrap();
let shift = u8::from_str(shift).unwrap();
proc_args.tcp_sock_speed_limit.shift_millis = shift;
proc_args.tcp_sock_speed_limit.max_north = *bytes;
proc_args.tcp_sock_speed_limit.max_south = *bytes;
}
if proc_args.time_limit.is_none() && proc_args.requests.is_none() {
proc_args.requests = Some(1);
}
Ok(proc_args)
}


@ -0,0 +1,37 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use tokio::io::{AsyncRead, AsyncWrite, BufReader};
use g3_io_ext::{LimitedReader, LimitedWriter};
pub(super) type BoxHttpForwardWriter = Box<dyn AsyncWrite + Send + Unpin>;
pub(super) type BoxHttpForwardReader = Box<dyn AsyncRead + Send + Unpin>;
pub(super) type BoxHttpForwardConnection = (BoxHttpForwardReader, BoxHttpForwardWriter);
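// an h1 connection kept alive between requests, with both halves wrapped
// by the rate limiters so reused connections stay accounted for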
pub(super) struct SavedHttpForwardConnection {
pub(super) reader: BufReader<LimitedReader<BoxHttpForwardReader>>,
pub(super) writer: LimitedWriter<BoxHttpForwardWriter>,
}
impl SavedHttpForwardConnection {
pub(super) fn new(
reader: BufReader<LimitedReader<BoxHttpForwardReader>>,
writer: LimitedWriter<BoxHttpForwardWriter>,
) -> Self {
SavedHttpForwardConnection { reader, writer }
}
}


@ -0,0 +1,73 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::Arc;
use clap::{ArgMatches, Command};
use super::{BenchTarget, BenchTaskContext, ProcArgs};
use crate::target::http::{HttpHistogram, HttpHistogramRecorder, HttpRuntimeStats};
mod connection;
use connection::{BoxHttpForwardConnection, SavedHttpForwardConnection};
mod opts;
use opts::BenchHttpArgs;
mod task;
use task::HttpTaskContext;
pub const COMMAND: &str = "h1";
struct HttpTarget {
args: Arc<BenchHttpArgs>,
proc_args: Arc<ProcArgs>,
stats: Arc<HttpRuntimeStats>,
histogram: Option<HttpHistogram>,
}
impl BenchTarget<HttpRuntimeStats, HttpHistogram, HttpTaskContext> for HttpTarget {
fn new_context(&self) -> anyhow::Result<HttpTaskContext> {
let histogram_recorder = self.histogram.as_ref().map(|h| h.recorder());
HttpTaskContext::new(&self.args, &self.proc_args, &self.stats, histogram_recorder)
}
fn fetch_runtime_stats(&self) -> Arc<HttpRuntimeStats> {
self.stats.clone()
}
fn take_histogram(&mut self) -> Option<HttpHistogram> {
self.histogram.take()
}
}
pub fn command() -> Command {
opts::add_http_args(Command::new(COMMAND))
}
pub async fn run(proc_args: &Arc<ProcArgs>, cmd_args: &ArgMatches) -> anyhow::Result<()> {
let mut http_args = opts::parse_http_args(cmd_args)?;
http_args.resolve_target_address(proc_args).await?;
let target = HttpTarget {
args: Arc::new(http_args),
proc_args: Arc::clone(proc_args),
stats: Arc::new(HttpRuntimeStats::new(COMMAND)),
histogram: Some(HttpHistogram::new()),
};
super::run(target, proc_args).await
}


@ -0,0 +1,542 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::borrow::Cow;
use std::io;
use std::net::{IpAddr, SocketAddr};
use std::pin::Pin;
use std::str::FromStr;
use std::time::Duration;
use anyhow::{anyhow, Context};
use clap::{value_parser, Arg, ArgAction, ArgMatches, Command};
use http::{Method, StatusCode};
use openssl::ssl::SslVerifyMode;
use tokio::io::{AsyncRead, AsyncWrite, BufReader};
use tokio::net::TcpStream;
use tokio_openssl::SslStream;
use url::Url;
use g3_io_ext::AggregatedIo;
use g3_types::collection::{SelectiveVec, WeightedValue};
use g3_types::net::{
HttpAuth, HttpProxy, OpensslTlsClientConfig, OpensslTlsClientConfigBuilder, Proxy, UpstreamAddr,
};
use super::{BoxHttpForwardConnection, ProcArgs};
use crate::target::{AppendTlsArgs, OpensslTlsClientArgs};
const HTTP_ARG_URL: &str = "url";
const HTTP_ARG_METHOD: &str = "method";
const HTTP_ARG_PROXY: &str = "proxy";
const HTTP_ARG_PROXY_TUNNEL: &str = "proxy-tunnel";
const HTTP_ARG_LOCAL_ADDRESS: &str = "local-address";
const HTTP_ARG_NO_KEEPALIVE: &str = "no-keepalive";
const HTTP_ARG_OK_STATUS: &str = "ok-status";
const HTTP_ARG_TIMEOUT: &str = "timeout";
const HTTP_ARG_HEADER_SIZE: &str = "header-size";
const HTTP_ARG_CONNECT_TIMEOUT: &str = "connect-timeout";
pub(super) struct BenchHttpArgs {
pub(super) method: Method,
target_url: Url,
forward_proxy: Option<HttpProxy>,
connect_proxy: Option<Proxy>,
bind: Option<IpAddr>,
pub(super) no_keepalive: bool,
pub(super) ok_status: Option<StatusCode>,
pub(super) timeout: Duration,
pub(super) max_header_size: usize,
pub(super) connect_timeout: Duration,
target_tls: OpensslTlsClientArgs,
proxy_tls: OpensslTlsClientArgs,
host: UpstreamAddr,
auth: HttpAuth,
peer_addrs: SelectiveVec<WeightedValue<SocketAddr>>,
}
impl BenchHttpArgs {
fn new(url: Url) -> anyhow::Result<Self> {
let upstream = UpstreamAddr::try_from(&url)?;
let auth = HttpAuth::try_from(&url)
.map_err(|e| anyhow!("failed to detect upstream auth method: {e}"))?;
let mut target_tls = OpensslTlsClientArgs::default();
if url.scheme() == "https" {
target_tls.config = Some(OpensslTlsClientConfigBuilder::with_cache_for_one_site());
}
Ok(BenchHttpArgs {
method: Method::GET,
target_url: url,
forward_proxy: None,
connect_proxy: None,
bind: None,
no_keepalive: false,
ok_status: None,
timeout: Duration::from_secs(30),
max_header_size: 4096,
connect_timeout: Duration::from_secs(15),
target_tls,
proxy_tls: OpensslTlsClientArgs::default(),
host: upstream,
auth,
peer_addrs: SelectiveVec::empty(),
})
}
pub(super) async fn resolve_target_address(
&mut self,
proc_args: &ProcArgs,
) -> anyhow::Result<()> {
let host = if let Some(proxy) = &self.connect_proxy {
proxy.peer()
} else if let Some(proxy) = &self.forward_proxy {
proxy.peer()
} else {
&self.host
};
self.peer_addrs = proc_args.resolve(host).await?;
Ok(())
}
pub(super) async fn new_tcp_connection(
&self,
proc_args: &ProcArgs,
) -> anyhow::Result<TcpStream> {
let peer = *proc_args.select_peer(&self.peer_addrs);
let socket = g3_socket::tcp::new_socket_to(
peer.ip(),
self.bind,
&Default::default(),
&Default::default(),
!self.no_keepalive,
)
.map_err(|e| anyhow!("failed to setup socket to {peer}: {e:?}"))?;
socket
.connect(peer)
.await
.map_err(|e| anyhow!("connect to {peer} error: {e:?}"))
}
pub(super) async fn new_http_connection(
&self,
proc_args: &ProcArgs,
) -> anyhow::Result<BoxHttpForwardConnection> {
if let Some(proxy) = &self.connect_proxy {
match proxy {
Proxy::Http(http_proxy) => {
let stream = self.new_tcp_connection(proc_args).await.context(format!(
"failed to connect to http proxy {}",
http_proxy.peer()
))?;
if let Some(tls_config) = &self.proxy_tls.client {
let tls_stream = self
.tls_connect_to_proxy(tls_config, http_proxy.peer(), stream)
.await?;
let (r, mut w) = tokio::io::split(tls_stream);
let mut buf_r = BufReader::new(r);
g3_http::connect::client::http_connect_to(
&mut buf_r,
&mut w,
&http_proxy.auth,
&self.host,
)
.await
.map_err(|e| {
anyhow!("http connect to {} failed: {e}", http_proxy.peer())
})?;
if let Some(tls_client) = &self.target_tls.client {
self.tls_connect_to_peer(
tls_client,
AggregatedIo::new(buf_r.into_inner(), w),
)
.await
} else {
Ok((Box::new(buf_r), Box::new(w)))
}
} else {
let (r, mut w) = stream.into_split();
let mut buf_r = BufReader::new(r);
g3_http::connect::client::http_connect_to(
&mut buf_r,
&mut w,
&http_proxy.auth,
&self.host,
)
.await
.map_err(|e| {
anyhow!("http connect to {} failed: {e}", http_proxy.peer())
})?;
if let Some(tls_client) = &self.target_tls.client {
self.tls_connect_to_peer(
tls_client,
AggregatedIo::new(buf_r.into_inner(), w),
)
.await
} else {
Ok((Box::new(buf_r), Box::new(w)))
}
}
}
Proxy::Socks4(socks4_proxy) => {
let stream = self.new_tcp_connection(proc_args).await.context(format!(
"failed to connect to socks4 proxy {}",
socks4_proxy.peer()
))?;
let (mut r, mut w) = stream.into_split();
g3_socks::v4a::client::socks4a_connect_to(&mut r, &mut w, &self.host)
.await
.map_err(|e| {
anyhow!("socks4a connect to {} failed: {e}", socks4_proxy.peer())
})?;
if let Some(tls_client) = &self.target_tls.client {
self.tls_connect_to_peer(tls_client, AggregatedIo::new(r, w))
.await
} else {
Ok((Box::new(BufReader::new(r)), Box::new(w)))
}
}
Proxy::Socks5(socks5_proxy) => {
let stream = self.new_tcp_connection(proc_args).await.context(format!(
"failed to connect to socks5 proxy {}",
socks5_proxy.peer()
))?;
let (mut r, mut w) = stream.into_split();
g3_socks::v5::client::socks5_connect_to(
&mut r,
&mut w,
&socks5_proxy.auth,
&self.host,
)
.await
.map_err(|e| {
anyhow!("socks5 connect to {} failed: {e}", socks5_proxy.peer())
})?;
if let Some(tls_client) = &self.target_tls.client {
self.tls_connect_to_peer(tls_client, AggregatedIo::new(r, w))
.await
} else {
Ok((Box::new(BufReader::new(r)), Box::new(w)))
}
}
}
} else if let Some(proxy) = &self.forward_proxy {
let stream = self
.new_tcp_connection(proc_args)
.await
.context(format!("failed to connect to http proxy {}", proxy.peer()))?;
if let Some(tls_client) = &self.proxy_tls.client {
let tls_stream = self
.tls_connect_to_proxy(tls_client, proxy.peer(), stream)
.await?;
let (r, w) = tokio::io::split(tls_stream);
Ok((Box::new(BufReader::new(r)), Box::new(w)))
} else {
let (r, w) = stream.into_split();
Ok((Box::new(BufReader::new(r)), Box::new(w)))
}
} else {
let stream = self
.new_tcp_connection(proc_args)
.await
.context(format!("failed to connect to target host {}", self.host))?;
if let Some(tls_client) = &self.target_tls.client {
self.tls_connect_to_peer(tls_client, stream).await
} else {
let (r, w) = stream.into_split();
Ok((Box::new(BufReader::new(r)), Box::new(w)))
}
}
}
async fn tls_connect_to_peer<S>(
&self,
tls_client: &OpensslTlsClientConfig,
stream: S,
) -> anyhow::Result<BoxHttpForwardConnection>
where
S: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
let tls_name = self
.target_tls
.tls_name
.as_ref()
.map(|v| Cow::Borrowed(v.as_str()))
.unwrap_or_else(|| self.host.host_str());
let mut ssl = tls_client
.build_ssl(&tls_name, self.host.port())
.context("failed to build ssl context")?;
if self.target_tls.no_verify {
ssl.set_verify(SslVerifyMode::NONE);
}
let mut tls_stream = SslStream::new(ssl, stream)
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
Pin::new(&mut tls_stream)
.connect()
.await
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
let (r, w) = tokio::io::split(tls_stream);
Ok((Box::new(BufReader::new(r)), Box::new(w)))
}
async fn tls_connect_to_proxy(
&self,
tls_client: &OpensslTlsClientConfig,
peer: &UpstreamAddr,
stream: TcpStream,
) -> anyhow::Result<SslStream<TcpStream>> {
let tls_name = self
.proxy_tls
.tls_name
.as_ref()
.map(|v| Cow::Borrowed(v.as_str()))
.unwrap_or_else(|| peer.host_str());
let mut ssl = tls_client
.build_ssl(&tls_name, peer.port())
.context("failed to build ssl context")?;
if self.proxy_tls.no_verify {
ssl.set_verify(SslVerifyMode::NONE);
}
let mut tls_stream = SslStream::new(ssl, stream)
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
Pin::new(&mut tls_stream)
.connect()
.await
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
Ok(tls_stream)
}
fn write_request_line<W: io::Write>(&self, buf: &mut W) -> io::Result<()> {
write!(buf, "{} ", self.method)?;
if self.forward_proxy.is_some() {
write!(buf, "{}://{}", self.target_url.scheme(), self.host)?;
}
buf.write_all(self.target_url.path().as_bytes())?;
if let Some(s) = self.target_url.query() {
write!(buf, "?{s}")?;
}
buf.write_all(b" HTTP/1.1\r\n")?; // TODO allow to use http1.0 ?
Ok(())
}
pub(super) fn write_fixed_request_header<W: io::Write>(&self, buf: &mut W) -> io::Result<()> {
self.write_request_line(buf)?;
write!(buf, "Host: {}\r\n", self.host)?;
if let Some(p) = &self.forward_proxy {
match &p.auth {
HttpAuth::None => {}
HttpAuth::Basic(basic) => {
buf.write_all(b"Proxy-Authorization: Basic ")?;
buf.write_all(basic.encoded_value().as_bytes())?;
buf.write_all(b"\r\n")?;
}
}
}
match &self.auth {
HttpAuth::None => {}
HttpAuth::Basic(basic) => {
buf.write_all(b"Authorization: Basic ")?;
buf.write_all(basic.encoded_value().as_bytes())?;
buf.write_all(b"\r\n")?;
}
}
if self.no_keepalive {
buf.write_all(b"Connection: close\r\n")?;
} else {
buf.write_all(b"Connection: keep-alive\r\n")?;
}
Ok(())
}
}
pub(super) fn add_http_args(app: Command) -> Command {
app.arg(Arg::new(HTTP_ARG_URL).required(true).num_args(1))
.arg(
Arg::new(HTTP_ARG_METHOD)
.value_name("METHOD")
.short('m')
.long(HTTP_ARG_METHOD)
.num_args(1)
.value_parser(["GET", "HEAD"])
.default_value("GET"),
)
.arg(
Arg::new(HTTP_ARG_PROXY)
.value_name("PROXY URL")
.short('x')
.help("Use a proxy")
.long(HTTP_ARG_PROXY)
.num_args(1),
)
.arg(
Arg::new(HTTP_ARG_PROXY_TUNNEL)
.short('p')
.long(HTTP_ARG_PROXY_TUNNEL)
.action(ArgAction::SetTrue)
.help("Use tunnel if the proxy is an HTTP proxy"),
)
.arg(
Arg::new(HTTP_ARG_LOCAL_ADDRESS)
.value_name("LOCAL IP ADDRESS")
.short('B')
.long(HTTP_ARG_LOCAL_ADDRESS)
.num_args(1)
.value_parser(value_parser!(IpAddr)),
)
.arg(
Arg::new(HTTP_ARG_NO_KEEPALIVE)
.help("Disable http keepalive")
.action(ArgAction::SetTrue)
.long(HTTP_ARG_NO_KEEPALIVE),
)
.arg(
Arg::new(HTTP_ARG_OK_STATUS)
.help("Only treat this status code as success")
.value_name("STATUS CODE")
.long(HTTP_ARG_OK_STATUS)
.num_args(1)
.value_parser(value_parser!(StatusCode)),
)
.arg(
Arg::new(HTTP_ARG_TIMEOUT)
.value_name("TIMEOUT DURATION")
.help("Http response timeout")
.default_value("30s")
.long(HTTP_ARG_TIMEOUT)
.num_args(1),
)
.arg(
Arg::new(HTTP_ARG_HEADER_SIZE)
.value_name("SIZE")
.help("Set max response header size")
.long(HTTP_ARG_HEADER_SIZE)
.num_args(1)
.value_parser(value_parser!(usize)),
)
.arg(
Arg::new(HTTP_ARG_CONNECT_TIMEOUT)
.value_name("TIMEOUT DURATION")
.help("Timeout for connection to next peer")
.default_value("15s")
.long(HTTP_ARG_CONNECT_TIMEOUT)
.num_args(1),
)
.append_tls_args()
.append_proxy_tls_args()
}
pub(super) fn parse_http_args(args: &ArgMatches) -> anyhow::Result<BenchHttpArgs> {
let url = if let Some(v) = args.get_one::<String>(HTTP_ARG_URL) {
Url::parse(v).context(format!("invalid {HTTP_ARG_URL} value"))?
} else {
return Err(anyhow!("no target url set"));
};
let mut h1_args = BenchHttpArgs::new(url)?;
if let Some(v) = args.get_one::<String>(HTTP_ARG_METHOD) {
let method = Method::from_str(v).context(format!("invalid {HTTP_ARG_METHOD} value"))?;
h1_args.method = method;
}
if let Some(v) = args.get_one::<String>(HTTP_ARG_PROXY) {
let url = Url::parse(v).context(format!("invalid {HTTP_ARG_PROXY} value"))?;
let proxy = Proxy::try_from(&url).map_err(|e| anyhow!("invalid proxy: {e}"))?;
if let Proxy::Http(mut http_proxy) = proxy {
h1_args.proxy_tls.config = http_proxy.tls_config.take();
if args.get_flag(HTTP_ARG_PROXY_TUNNEL) {
h1_args.connect_proxy = Some(Proxy::Http(http_proxy));
} else {
h1_args.forward_proxy = Some(http_proxy);
}
} else {
h1_args.connect_proxy = Some(proxy);
}
}
if let Some(ip) = args.get_one::<IpAddr>(HTTP_ARG_LOCAL_ADDRESS) {
h1_args.bind = Some(*ip);
}
if args.get_flag(HTTP_ARG_NO_KEEPALIVE) {
h1_args.no_keepalive = true;
}
if let Some(code) = args.get_one::<StatusCode>(HTTP_ARG_OK_STATUS) {
h1_args.ok_status = Some(*code);
}
if let Some(timeout) = g3_clap::humanize::get_duration(args, HTTP_ARG_TIMEOUT)? {
h1_args.timeout = timeout;
}
if let Some(header_size) = g3_clap::humanize::get_usize(args, HTTP_ARG_HEADER_SIZE)? {
h1_args.max_header_size = header_size;
}
if let Some(timeout) = g3_clap::humanize::get_duration(args, HTTP_ARG_CONNECT_TIMEOUT)? {
h1_args.connect_timeout = timeout;
}
h1_args
.target_tls
.parse_tls_args(args)
.context("invalid target tls config")?;
h1_args
.proxy_tls
.parse_proxy_tls_args(args)
.context("invalid proxy tls config")?;
match h1_args.target_url.scheme() {
"http" | "https" => {}
"ftp" => {
if h1_args.forward_proxy.is_none() {
return Err(anyhow!(
"forward proxy is required for target url {}",
h1_args.target_url
));
}
}
_ => return Err(anyhow!("unsupported target url {}", h1_args.target_url)),
}
Ok(h1_args)
}


@ -0,0 +1,248 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::Arc;
use std::time::Duration;
use anyhow::{anyhow, Context};
use async_trait::async_trait;
use futures_util::FutureExt;
use tokio::io::{AsyncReadExt, AsyncWriteExt, BufReader};
use tokio::time::Instant;
use g3_http::client::HttpForwardRemoteResponse;
use g3_http::HttpBodyReader;
use g3_io_ext::{ArcLimitedReaderStats, ArcLimitedWriterStats, LimitedReader, LimitedWriter};
use super::{
BenchHttpArgs, BenchTaskContext, HttpHistogramRecorder, HttpRuntimeStats, ProcArgs,
SavedHttpForwardConnection,
};
pub(super) struct HttpTaskContext {
args: Arc<BenchHttpArgs>,
proc_args: Arc<ProcArgs>,
saved_connection: Option<SavedHttpForwardConnection>,
reuse_conn_count: u64,
runtime_stats: Arc<HttpRuntimeStats>,
histogram_recorder: Option<HttpHistogramRecorder>,
req_header: Vec<u8>,
req_header_fixed_len: usize,
}
impl HttpTaskContext {
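// the invariant parts of the request header are rendered once in new();
// each run truncates back to req_header_fixed_len before appending more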
pub(super) fn new(
args: &Arc<BenchHttpArgs>,
proc_args: &Arc<ProcArgs>,
runtime_stats: &Arc<HttpRuntimeStats>,
histogram_recorder: Option<HttpHistogramRecorder>,
) -> anyhow::Result<Self> {
let mut hdr_buf = Vec::with_capacity(1024);
args.write_fixed_request_header(&mut hdr_buf)
.map_err(|e| anyhow!("failed to generate request header: {}", e))?;
let req_header_fixed_len = hdr_buf.len();
Ok(HttpTaskContext {
args: Arc::clone(args),
proc_args: Arc::clone(proc_args),
saved_connection: None,
reuse_conn_count: 0,
runtime_stats: Arc::clone(runtime_stats),
histogram_recorder,
req_header: hdr_buf,
req_header_fixed_len,
})
}
async fn fetch_connection(&mut self) -> anyhow::Result<SavedHttpForwardConnection> {
if let Some(mut c) = self.saved_connection.take() {
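// probe with a non-blocking read: a pending (None) result means the peer
// sent neither EOF nor stray data, so the idle connection can be reused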
let mut buf = [0u8; 4];
if c.reader.read(&mut buf).now_or_never().is_none() {
// no eof, reuse the old connection
self.reuse_conn_count += 1;
return Ok(c);
}
}
if let Some(r) = &mut self.histogram_recorder {
r.record_conn_reuse_count(self.reuse_conn_count);
}
self.reuse_conn_count = 0;
self.runtime_stats.add_conn_attempt();
let (r, w) = match tokio::time::timeout(
self.args.connect_timeout,
self.args.new_http_connection(&self.proc_args),
)
.await
{
Ok(Ok(c)) => c,
Ok(Err(e)) => return Err(e),
Err(_) => return Err(anyhow!("timeout to get new connection")),
};
self.runtime_stats.add_conn_success();
let r = LimitedReader::new(
r,
self.proc_args.tcp_sock_speed_limit.shift_millis,
self.proc_args.tcp_sock_speed_limit.max_south,
self.runtime_stats.clone() as ArcLimitedReaderStats,
);
let w = LimitedWriter::new(
w,
self.proc_args.tcp_sock_speed_limit.shift_millis,
self.proc_args.tcp_sock_speed_limit.max_north,
self.runtime_stats.clone() as ArcLimitedWriterStats,
);
Ok(SavedHttpForwardConnection::new(BufReader::new(r), w))
}
fn save_connection(&mut self, c: SavedHttpForwardConnection) {
self.saved_connection = Some(c);
}
fn reset_request_header(&mut self) {
// reset request header
self.req_header.truncate(self.req_header_fixed_len);
// TODO generate dynamic header
self.req_header.extend_from_slice(b"\r\n");
}
async fn run_with_connection(
&mut self,
time_started: Instant,
connection: &mut SavedHttpForwardConnection,
) -> anyhow::Result<bool> {
let keep_alive = !self.args.no_keepalive;
let ups_r = &mut connection.reader;
let ups_w = &mut connection.writer;
// send hdr
ups_w
.write_all(self.req_header.as_slice())
.await
.map_err(|e| anyhow!("failed to send request header: {e:?}"))?;
let send_hdr_time = time_started.elapsed();
if let Some(r) = &mut self.histogram_recorder {
r.record_send_hdr_time(send_hdr_time);
}
// recv hdr
let rsp = match tokio::time::timeout(
self.args.timeout,
HttpForwardRemoteResponse::parse(
ups_r,
&self.args.method,
keep_alive,
self.args.max_header_size,
),
)
.await
{
Ok(Ok(r)) => r,
Ok(Err(e)) => return Err(anyhow!("failed to read response: {e}")),
Err(_) => return Err(anyhow!("timeout to read response")),
};
let recv_hdr_time = time_started.elapsed();
if let Some(r) = &mut self.histogram_recorder {
r.record_recv_hdr_time(recv_hdr_time);
}
if let Some(ok_status) = self.args.ok_status {
if rsp.code != ok_status.as_u16() {
return Err(anyhow!(
"Got rsp code {} while {} is expected",
rsp.code,
ok_status.as_u16()
));
}
}
// recv body
if let Some(body_type) = rsp.body_type(&self.args.method) {
let mut body_reader = HttpBodyReader::new(ups_r, body_type, 2048);
let mut sink = tokio::io::sink();
tokio::io::copy(&mut body_reader, &mut sink)
.await
.map_err(|e| anyhow!("failed to read response body: {e:?}"))?;
}
Ok(keep_alive && rsp.keep_alive())
}
}
#[async_trait]
impl BenchTaskContext for HttpTaskContext {
fn mark_task_start(&self) {
self.runtime_stats.add_task_total();
self.runtime_stats.inc_task_alive();
}
fn mark_task_passed(&self) {
self.runtime_stats.add_task_passed();
self.runtime_stats.dec_task_alive();
}
fn mark_task_failed(&self) {
self.runtime_stats.add_task_failed();
self.runtime_stats.dec_task_alive();
}
async fn run(&mut self, _task_id: usize, time_started: Instant) -> anyhow::Result<()> {
self.reset_request_header();
let mut connection = self
.fetch_connection()
.await
.context("connect to upstream failed")?;
match self
.run_with_connection(time_started, &mut connection)
.await
{
Ok(keep_alive) => {
let total_time = time_started.elapsed();
if let Some(r) = &mut self.histogram_recorder {
r.record_total_time(total_time);
}
if keep_alive {
self.save_connection(connection);
} else {
let runtime_stats = self.runtime_stats.clone();
tokio::spawn(async move {
// shut down gracefully so the TLS session ticket can be reused
match tokio::time::timeout(
Duration::from_secs(4),
connection.writer.shutdown(),
)
.await
{
Ok(Ok(_)) => {}
Ok(Err(_e)) => runtime_stats.add_conn_close_fail(),
Err(_) => runtime_stats.add_conn_close_timeout(),
}
});
}
Ok(())
}
Err(e) => Err(e),
}
}
}


@ -0,0 +1,126 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::Arc;
use anyhow::anyhow;
use clap::{ArgMatches, Command};
use http::{HeaderValue, Method, Request, Uri, Version};
use super::{BenchTarget, BenchTaskContext, ProcArgs};
use crate::target::http::{HttpHistogram, HttpHistogramRecorder, HttpRuntimeStats};
mod opts;
use opts::BenchH2Args;
mod pool;
use pool::H2ConnectionPool;
mod task;
use task::H2TaskContext;
pub const COMMAND: &str = "h2";
struct H2Target {
args: Arc<BenchH2Args>,
proc_args: Arc<ProcArgs>,
stats: Arc<HttpRuntimeStats>,
histogram: Option<HttpHistogram>,
pool: Option<Arc<H2ConnectionPool>>,
}
impl BenchTarget<HttpRuntimeStats, HttpHistogram, H2TaskContext> for H2Target {
fn new_context(&self) -> anyhow::Result<H2TaskContext> {
let histogram_recorder = self.histogram.as_ref().map(|h| h.recorder());
H2TaskContext::new(
&self.args,
&self.proc_args,
&self.stats,
histogram_recorder,
self.pool.clone(),
)
}
fn fetch_runtime_stats(&self) -> Arc<HttpRuntimeStats> {
self.stats.clone()
}
fn take_histogram(&mut self) -> Option<HttpHistogram> {
self.histogram.take()
}
fn notify_finish(&mut self) {
self.pool = None;
}
}
pub fn command() -> Command {
opts::add_h2_args(Command::new(COMMAND))
}
pub async fn run(proc_args: &Arc<ProcArgs>, cmd_args: &ArgMatches) -> anyhow::Result<()> {
let mut h2_args = opts::parse_h2_args(cmd_args)?;
h2_args.resolve_target_address(proc_args).await?;
let h2_args = Arc::new(h2_args);
let runtime_stats = Arc::new(HttpRuntimeStats::new(COMMAND));
let histogram = Some(HttpHistogram::new());
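// with --connection-pool, all task contexts share the pooled h2 connections;
// otherwise every task context manages its own connection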
let pool = h2_args.pool_size.map(|s| {
Arc::new(H2ConnectionPool::new(
&h2_args,
proc_args,
s,
&runtime_stats,
histogram.as_ref(),
))
});
let target = H2Target {
args: h2_args,
proc_args: Arc::clone(proc_args),
stats: runtime_stats,
histogram,
pool,
};
super::run(target, proc_args).await
}
struct H2PreRequest {
method: Method,
uri: Uri,
host: HeaderValue,
auth: Option<HeaderValue>,
}
impl H2PreRequest {
fn build_request(&self) -> anyhow::Result<Request<()>> {
let mut req = Request::builder()
.version(Version::HTTP_2)
.method(self.method.clone())
.uri(self.uri.clone())
.body(())
.map_err(|e| anyhow!("failed to build request: {e:?}"))?;
req.headers_mut()
.insert(http::header::HOST, self.host.clone());
if let Some(v) = &self.auth {
req.headers_mut()
.insert(http::header::AUTHORIZATION, v.clone());
}
Ok(req)
}
}


@ -0,0 +1,524 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::borrow::Cow;
use std::net::{IpAddr, SocketAddr};
use std::pin::Pin;
use std::str::FromStr;
use std::sync::Arc;
use std::time::Duration;
use anyhow::{anyhow, Context};
use bytes::Bytes;
use clap::{value_parser, Arg, ArgAction, ArgMatches, Command};
use h2::client::SendRequest;
use http::{HeaderValue, Method, StatusCode};
use openssl::ssl::SslVerifyMode;
use tokio::io::{AsyncRead, AsyncWrite, BufReader};
use tokio::net::TcpStream;
use tokio_openssl::SslStream;
use url::Url;
use g3_io_ext::{AggregatedIo, LimitedStream};
use g3_types::collection::{SelectiveVec, WeightedValue};
use g3_types::net::{
AlpnProtocol, HttpAuth, OpensslTlsClientConfig, OpensslTlsClientConfigBuilder, Proxy,
UpstreamAddr,
};
use super::{H2PreRequest, HttpRuntimeStats, ProcArgs};
use crate::target::{AppendTlsArgs, OpensslTlsClientArgs};
const HTTP_ARG_CONNECTION_POOL: &str = "connection-pool";
const HTTP_ARG_URI: &str = "uri";
const HTTP_ARG_METHOD: &str = "method";
const HTTP_ARG_PROXY: &str = "proxy";
const HTTP_ARG_LOCAL_ADDRESS: &str = "local-address";
const HTTP_ARG_NO_MULTIPLEX: &str = "no-multiplex";
const HTTP_ARG_OK_STATUS: &str = "ok-status";
const HTTP_ARG_TIMEOUT: &str = "timeout";
const HTTP_ARG_CONNECT_TIMEOUT: &str = "connect-timeout";
pub(super) struct BenchH2Args {
pub(super) pool_size: Option<usize>,
pub(super) method: Method,
target_url: Url,
connect_proxy: Option<Proxy>,
bind: Option<IpAddr>,
pub(super) no_multiplex: bool,
pub(super) ok_status: Option<StatusCode>,
pub(super) timeout: Duration,
pub(super) connect_timeout: Duration,
target_tls: OpensslTlsClientArgs,
proxy_tls: OpensslTlsClientArgs,
host: UpstreamAddr,
auth: HttpAuth,
peer_addrs: SelectiveVec<WeightedValue<SocketAddr>>,
}
impl BenchH2Args {
fn new(url: Url) -> anyhow::Result<Self> {
let upstream = UpstreamAddr::try_from(&url)?;
let auth = HttpAuth::try_from(&url)
.map_err(|e| anyhow!("failed to detect upstream auth method: {e}"))?;
let mut target_tls = OpensslTlsClientArgs::default();
if url.scheme() == "https" {
target_tls.config = Some(OpensslTlsClientConfigBuilder::with_cache_for_one_site());
}
Ok(BenchH2Args {
pool_size: None,
method: Method::GET,
target_url: url,
connect_proxy: None,
bind: None,
no_multiplex: false,
ok_status: None,
timeout: Duration::from_secs(30),
connect_timeout: Duration::from_secs(15),
target_tls,
proxy_tls: OpensslTlsClientArgs::default(),
host: upstream,
auth,
peer_addrs: SelectiveVec::empty(),
})
}
pub(super) async fn resolve_target_address(
&mut self,
proc_args: &ProcArgs,
) -> anyhow::Result<()> {
let host = if let Some(proxy) = &self.connect_proxy {
proxy.peer()
} else {
&self.host
};
self.peer_addrs = proc_args.resolve(host).await?;
Ok(())
}
pub(super) async fn new_tcp_connection(
&self,
proc_args: &ProcArgs,
) -> anyhow::Result<TcpStream> {
let peer = *proc_args.select_peer(&self.peer_addrs);
let socket = g3_socket::tcp::new_socket_to(
peer.ip(),
self.bind,
&Default::default(),
&Default::default(),
true,
)
.map_err(|e| anyhow!("failed to setup socket to {peer}: {e:?}"))?;
socket
.connect(peer)
.await
.map_err(|e| anyhow!("connect to {peer} error: {e:?}"))
}
pub(super) async fn new_h2_connection(
&self,
stats: &Arc<HttpRuntimeStats>,
proc_args: &ProcArgs,
) -> anyhow::Result<SendRequest<Bytes>> {
if let Some(proxy) = &self.connect_proxy {
match proxy {
Proxy::Http(http_proxy) => {
let stream = self.new_tcp_connection(proc_args).await.context(format!(
"failed to connect to http proxy {}",
http_proxy.peer()
))?;
if let Some(tls_config) = &self.proxy_tls.client {
let tls_stream = self
.tls_connect_to_proxy(tls_config, http_proxy.peer(), stream)
.await?;
let (r, mut w) = tokio::io::split(tls_stream);
let mut buf_r = BufReader::new(r);
g3_http::connect::client::http_connect_to(
&mut buf_r,
&mut w,
&http_proxy.auth,
&self.host,
)
.await
.map_err(|e| {
anyhow!("http connect to {} failed: {e}", http_proxy.peer())
})?;
let stream = AggregatedIo::new(buf_r.into_inner(), w);
self.connect_to_target(proc_args, stream, stats).await
} else {
let (r, mut w) = stream.into_split();
let mut buf_r = BufReader::new(r);
g3_http::connect::client::http_connect_to(
&mut buf_r,
&mut w,
&http_proxy.auth,
&self.host,
)
.await
.map_err(|e| {
anyhow!("http connect to {} failed: {e}", http_proxy.peer())
})?;
let stream = AggregatedIo::new(buf_r.into_inner(), w);
self.connect_to_target(proc_args, stream, stats).await
}
}
Proxy::Socks4(socks4_proxy) => {
let stream = self.new_tcp_connection(proc_args).await.context(format!(
"failed to connect to socks4 proxy {}",
socks4_proxy.peer()
))?;
let (mut r, mut w) = stream.into_split();
g3_socks::v4a::client::socks4a_connect_to(&mut r, &mut w, &self.host)
.await
.map_err(|e| {
anyhow!("socks4a connect to {} failed: {e}", socks4_proxy.peer())
})?;
let stream = AggregatedIo::new(r, w);
self.connect_to_target(proc_args, stream, stats).await
}
Proxy::Socks5(socks5_proxy) => {
let stream = self.new_tcp_connection(proc_args).await.context(format!(
"failed to connect to socks5 proxy {}",
socks5_proxy.peer()
))?;
let (mut r, mut w) = stream.into_split();
g3_socks::v5::client::socks5_connect_to(
&mut r,
&mut w,
&socks5_proxy.auth,
&self.host,
)
.await
.map_err(|e| {
anyhow!("socks5 connect to {} failed: {e}", socks5_proxy.peer())
})?;
let stream = AggregatedIo::new(r, w);
self.connect_to_target(proc_args, stream, stats).await
}
}
} else {
let stream = self
.new_tcp_connection(proc_args)
.await
.context(format!("failed to connect to target host {}", self.host))?;
self.connect_to_target(proc_args, stream, stats).await
}
}
async fn connect_to_target<S>(
&self,
proc_args: &ProcArgs,
stream: S,
stats: &Arc<HttpRuntimeStats>,
) -> anyhow::Result<SendRequest<Bytes>>
where
S: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
if let Some(tls_client) = &self.target_tls.client {
let tls_stream = self
.tls_connect_to_target(tls_client, stream)
.await
.context("tls connect to target failed")?;
self.h2_handshake(proc_args, tls_stream, stats)
.await
.context("h2 handshake failed")
} else {
self.h2_handshake(proc_args, stream, stats)
.await
.context("h2 handshake failed")
}
}
async fn h2_handshake<S>(
&self,
proc_args: &ProcArgs,
stream: S,
stats: &Arc<HttpRuntimeStats>,
) -> anyhow::Result<SendRequest<Bytes>>
where
S: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
let speed_limit = &proc_args.tcp_sock_speed_limit;
let stream = LimitedStream::new(
stream,
speed_limit.shift_millis,
speed_limit.max_south,
speed_limit.max_north,
stats.clone(),
);
let mut client_builder = h2::client::Builder::new();
client_builder.max_concurrent_streams(1).enable_push(false);
// handshake via the configured builder; the plain h2::client::handshake()
// would silently drop the settings applied above
let (h2s, h2s_connection) = client_builder
.handshake::<_, Bytes>(stream)
.await
.map_err(|e| anyhow!("h2 handshake failed: {e:?}"))?;
tokio::spawn(async move {
let _ = h2s_connection.await;
});
Ok(h2s)
}
async fn tls_connect_to_target<S>(
&self,
tls_client: &OpensslTlsClientConfig,
stream: S,
) -> anyhow::Result<SslStream<S>>
where
S: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
let tls_name = self
.target_tls
.tls_name
.as_ref()
.map(|v| Cow::Borrowed(v.as_str()))
.unwrap_or_else(|| self.host.host_str());
let mut ssl = tls_client
.build_ssl(&tls_name, self.host.port())
.context("failed to build ssl context")?;
if self.target_tls.no_verify {
ssl.set_verify(SslVerifyMode::NONE);
}
let mut tls_stream = SslStream::new(ssl, stream)
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
Pin::new(&mut tls_stream)
.connect()
.await
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
if let Some(alpn) = tls_stream.ssl().selected_alpn_protocol() {
if AlpnProtocol::from_buf(alpn) != Some(AlpnProtocol::Http2) {
return Err(anyhow!("invalid returned alpn protocol: {:?}", alpn));
}
}
Ok(tls_stream)
}
async fn tls_connect_to_proxy(
&self,
tls_client: &OpensslTlsClientConfig,
peer: &UpstreamAddr,
stream: TcpStream,
) -> anyhow::Result<SslStream<TcpStream>> {
let tls_name = self
.proxy_tls
.tls_name
.as_ref()
.map(|v| Cow::Borrowed(v.as_str()))
.unwrap_or_else(|| peer.host_str());
let mut ssl = tls_client
.build_ssl(&tls_name, peer.port())
.context("failed to build ssl context")?;
if self.proxy_tls.no_verify {
ssl.set_verify(SslVerifyMode::NONE);
}
let mut tls_stream = SslStream::new(ssl, stream)
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
Pin::new(&mut tls_stream)
.connect()
.await
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
Ok(tls_stream)
}
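// pre-build the request pieces that are identical for every h2 request:
// method, uri, and the Host/Authorization header values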
pub(super) fn build_pre_request_header(&self) -> anyhow::Result<H2PreRequest> {
let path_and_query = if let Some(q) = self.target_url.query() {
format!("{}?{q}", self.target_url.path())
} else {
self.target_url.path().to_string()
};
let uri = http::Uri::builder()
.scheme(self.target_url.scheme())
.authority(self.host.to_string())
.path_and_query(path_and_query)
.build()
.map_err(|e| anyhow!("failed to build request: {e:?}"))?;
let host_str = self.host.to_string();
let host =
HeaderValue::from_str(&host_str).map_err(|e| anyhow!("invalid host value: {e:?}"))?;
let auth = match &self.auth {
HttpAuth::None => None,
HttpAuth::Basic(basic) => {
let value = format!("Basic {}", basic.encoded_value());
let value = HeaderValue::from_str(&value)
.map_err(|e| anyhow!("invalid auth value: {e:?}"))?;
Some(value)
}
};
Ok(H2PreRequest {
method: self.method.clone(),
uri,
host,
auth,
})
}
}
pub(super) fn add_h2_args(app: Command) -> Command {
app.arg(Arg::new(HTTP_ARG_URI).required(true).num_args(1))
.arg(
Arg::new(HTTP_ARG_CONNECTION_POOL)
.help(
"Set the number of pooled underlying h2 connections.\n\
If not set, each concurrent task will use its own h2 connection",
)
.value_name("POOL SIZE")
.long(HTTP_ARG_CONNECTION_POOL)
.short('C')
.num_args(1)
.value_parser(value_parser!(usize))
.conflicts_with(HTTP_ARG_NO_MULTIPLEX),
)
.arg(
Arg::new(HTTP_ARG_METHOD)
.value_name("METHOD")
.short('m')
.long(HTTP_ARG_METHOD)
.num_args(1)
.value_parser(["GET", "HEAD"])
.default_value("GET"),
)
.arg(
Arg::new(HTTP_ARG_PROXY)
.value_name("PROXY URL")
.short('x')
.help("Use a proxy")
.long(HTTP_ARG_PROXY)
.num_args(1),
)
.arg(
Arg::new(HTTP_ARG_LOCAL_ADDRESS)
.value_name("LOCAL IP ADDRESS")
.short('B')
.long(HTTP_ARG_LOCAL_ADDRESS)
.num_args(1)
.value_parser(value_parser!(IpAddr)),
)
.arg(
Arg::new(HTTP_ARG_NO_MULTIPLEX)
.help("Disable h2 connection multiplexing")
.action(ArgAction::SetTrue)
.long(HTTP_ARG_NO_MULTIPLEX)
.conflicts_with(HTTP_ARG_CONNECTION_POOL),
)
.arg(
Arg::new(HTTP_ARG_OK_STATUS)
.help("Only treat this status code as success")
.value_name("STATUS CODE")
.long(HTTP_ARG_OK_STATUS)
.num_args(1)
.value_parser(value_parser!(StatusCode)),
)
.arg(
Arg::new(HTTP_ARG_TIMEOUT)
.help("Http response timeout")
.value_name("TIMEOUT DURATION")
.default_value("30s")
.long(HTTP_ARG_TIMEOUT)
.num_args(1),
)
.arg(
Arg::new(HTTP_ARG_CONNECT_TIMEOUT)
.help("Timeout for connection to next peer")
.value_name("TIMEOUT DURATION")
.default_value("15s")
.long(HTTP_ARG_CONNECT_TIMEOUT)
.num_args(1),
)
.append_tls_args()
.append_proxy_tls_args()
}
pub(super) fn parse_h2_args(args: &ArgMatches) -> anyhow::Result<BenchH2Args> {
let url = if let Some(v) = args.get_one::<String>(HTTP_ARG_URI) {
Url::parse(v).context(format!("invalid {HTTP_ARG_URI} value"))?
} else {
return Err(anyhow!("no target url set"));
};
let mut h2_args = BenchH2Args::new(url)?;
if let Some(c) = args.get_one::<usize>(HTTP_ARG_CONNECTION_POOL) {
if *c > 0 {
h2_args.pool_size = Some(*c);
}
}
if let Some(v) = args.get_one::<String>(HTTP_ARG_METHOD) {
let method = Method::from_str(v).context(format!("invalid {HTTP_ARG_METHOD} value"))?;
h2_args.method = method;
}
if let Some(v) = args.get_one::<String>(HTTP_ARG_PROXY) {
let url = Url::parse(v).context(format!("invalid {HTTP_ARG_PROXY} value"))?;
let proxy = Proxy::try_from(&url).map_err(|e| anyhow!("invalid proxy: {e}"))?;
h2_args.connect_proxy = Some(proxy);
}
if let Some(ip) = args.get_one::<IpAddr>(HTTP_ARG_LOCAL_ADDRESS) {
h2_args.bind = Some(*ip);
}
if args.get_flag(HTTP_ARG_NO_MULTIPLEX) {
h2_args.no_multiplex = true;
}
if let Some(code) = args.get_one::<StatusCode>(HTTP_ARG_OK_STATUS) {
h2_args.ok_status = Some(*code);
}
if let Some(timeout) = g3_clap::humanize::get_duration(args, HTTP_ARG_TIMEOUT)? {
h2_args.timeout = timeout;
}
if let Some(timeout) = g3_clap::humanize::get_duration(args, HTTP_ARG_CONNECT_TIMEOUT)? {
h2_args.connect_timeout = timeout;
}
h2_args
.target_tls
.parse_tls_args(args)
.context("invalid target tls config")?;
h2_args
.proxy_tls
.parse_proxy_tls_args(args)
.context("invalid proxy tls config")?;
match h2_args.target_url.scheme() {
"http" | "https" => {}
_ => return Err(anyhow!("unsupported target url {}", h2_args.target_url)),
}
Ok(h2_args)
}
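A note on the flags above: the connection-pool and no-multiplex options are declared as conflicting, so clap rejects the combination at parse time. A minimal, self-contained sketch of that wiring (the argument ids below are placeholders, not the real g3bench constants):

use clap::{value_parser, Arg, ArgAction, Command};

fn main() {
    let cmd = Command::new("demo")
        .arg(
            Arg::new("connection-pool")
                .long("connection-pool")
                .num_args(1)
                .value_parser(value_parser!(usize))
                .conflicts_with("no-multiplex"),
        )
        .arg(
            Arg::new("no-multiplex")
                .long("no-multiplex")
                .action(ArgAction::SetTrue),
        );
    // passing both flags together is a parse error
    let err = cmd
        .try_get_matches_from(["demo", "--connection-pool", "4", "--no-multiplex"])
        .unwrap_err();
    println!("{}", err.kind());
}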


@ -0,0 +1,187 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use anyhow::anyhow;
use bytes::Bytes;
use h2::client::SendRequest;
use tokio::sync::Mutex;
use super::{BenchH2Args, HttpHistogram, HttpHistogramRecorder, HttpRuntimeStats, ProcArgs};
struct H2ConnectionUnlocked {
args: Arc<BenchH2Args>,
proc_args: Arc<ProcArgs>,
index: usize,
h2s: Option<SendRequest<Bytes>>,
runtime_stats: Arc<HttpRuntimeStats>,
histogram_recorder: Option<HttpHistogramRecorder>,
reuse_conn_count: u64,
}
impl Drop for H2ConnectionUnlocked {
fn drop(&mut self) {
if let Some(r) = &mut self.histogram_recorder {
r.record_conn_reuse_count(self.reuse_conn_count);
}
self.reuse_conn_count = 0;
}
}
impl H2ConnectionUnlocked {
fn new(
args: Arc<BenchH2Args>,
proc_args: Arc<ProcArgs>,
index: usize,
runtime_stats: Arc<HttpRuntimeStats>,
histogram_recorder: Option<HttpHistogramRecorder>,
) -> Self {
H2ConnectionUnlocked {
args,
proc_args,
index,
h2s: None,
runtime_stats,
histogram_recorder,
reuse_conn_count: 0,
}
}
async fn fetch_stream(&mut self) -> anyhow::Result<SendRequest<Bytes>> {
if let Some(h2s) = self.h2s.clone() {
if let Ok(send_req) = h2s.ready().await {
self.reuse_conn_count += 1;
return Ok(send_req);
}
}
if let Some(r) = &mut self.histogram_recorder {
r.record_conn_reuse_count(self.reuse_conn_count);
}
self.reuse_conn_count = 0;
self.runtime_stats.add_conn_attempt();
let new_h2s = match tokio::time::timeout(
self.args.connect_timeout,
self.args
.new_h2_connection(&self.runtime_stats, &self.proc_args),
)
.await
{
Ok(Ok(h2s)) => h2s,
Ok(Err(e)) => return Err(e.context(format!("P#{} new connection failed", self.index))),
            Err(_) => return Err(anyhow!("timed out getting a new connection")),
};
self.runtime_stats.add_conn_success();
let s = new_h2s
.clone()
.ready()
.await
.map_err(|e| anyhow!("P#{} failed to open new stream: {e:?}", self.index))?;
self.h2s = Some(new_h2s);
Ok(s)
}
}
struct H2Connection {
inner: Mutex<H2ConnectionUnlocked>,
}
impl H2Connection {
fn new(
args: Arc<BenchH2Args>,
proc_args: Arc<ProcArgs>,
index: usize,
runtime_stats: Arc<HttpRuntimeStats>,
histogram_recorder: Option<HttpHistogramRecorder>,
) -> Self {
H2Connection {
inner: Mutex::new(H2ConnectionUnlocked::new(
args,
proc_args,
index,
runtime_stats,
histogram_recorder,
)),
}
}
async fn fetch_stream(&self) -> anyhow::Result<SendRequest<Bytes>> {
let mut inner = self.inner.lock().await;
inner.fetch_stream().await
}
}
pub(super) struct H2ConnectionPool {
pool: Vec<H2Connection>,
pool_size: usize,
cur_index: AtomicUsize,
}
impl H2ConnectionPool {
pub(super) fn new(
args: &Arc<BenchH2Args>,
proc_args: &Arc<ProcArgs>,
pool_size: usize,
runtime_stats: &Arc<HttpRuntimeStats>,
histogram_stats: Option<&HttpHistogram>,
) -> Self {
let mut pool = Vec::with_capacity(pool_size);
for i in 0..pool_size {
pool.push(H2Connection::new(
args.clone(),
proc_args.clone(),
i,
runtime_stats.clone(),
histogram_stats.map(|s| s.recorder()),
));
}
H2ConnectionPool {
pool,
pool_size,
cur_index: AtomicUsize::new(0),
}
}
pub(super) async fn fetch_stream(&self) -> anyhow::Result<SendRequest<Bytes>> {
match self.pool_size {
0 => Err(anyhow!("no connections configured for this pool")),
1 => self.pool[0].fetch_stream().await,
_ => {
                let mut index = self.cur_index.load(Ordering::Acquire);
                loop {
                    let mut next = index + 1;
                    if next >= self.pool_size {
                        next = 0;
                    }
                    match self.cur_index.compare_exchange(
                        index,
                        next,
                        Ordering::AcqRel,
                        Ordering::Acquire,
                    ) {
                        Ok(_) => return self.pool.get(index).unwrap().fetch_stream().await,
                        Err(v) => index = v,
                    }
}
}
}
}
}
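fetch_stream above rotates through the pool without a lock: each caller retries compare_exchange until the shared cursor advances, then keeps the slot it won. The same pattern in isolation, using only std:

use std::sync::atomic::{AtomicUsize, Ordering};

struct RoundRobin {
    cur: AtomicUsize,
    len: usize,
}

impl RoundRobin {
    fn next_index(&self) -> usize {
        let mut cur = self.cur.load(Ordering::Acquire);
        loop {
            let next = if cur + 1 >= self.len { 0 } else { cur + 1 };
            match self
                .cur
                .compare_exchange(cur, next, Ordering::AcqRel, Ordering::Acquire)
            {
                Ok(_) => return cur,          // we won this slot
                Err(actual) => cur = actual,  // lost the race; retry from the new value
            }
        }
    }
}

fn main() {
    let rr = RoundRobin {
        cur: AtomicUsize::new(0),
        len: 3,
    };
    let picks: Vec<usize> = (0..7).map(|_| rr.next_index()).collect();
    println!("{picks:?}"); // [0, 1, 2, 0, 1, 2, 0]
}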


@ -0,0 +1,218 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::Arc;
use anyhow::{anyhow, Context};
use async_trait::async_trait;
use bytes::Bytes;
use h2::client::SendRequest;
use tokio::time::Instant;
use super::{
BenchH2Args, BenchTaskContext, H2ConnectionPool, H2PreRequest, HttpHistogramRecorder,
HttpRuntimeStats, ProcArgs,
};
pub(super) struct H2TaskContext {
args: Arc<BenchH2Args>,
proc_args: Arc<ProcArgs>,
pool: Option<Arc<H2ConnectionPool>>,
h2s: Option<SendRequest<Bytes>>,
reuse_conn_count: u64,
pre_request: H2PreRequest,
runtime_stats: Arc<HttpRuntimeStats>,
histogram_recorder: Option<HttpHistogramRecorder>,
}
impl Drop for H2TaskContext {
fn drop(&mut self) {
if let Some(r) = &mut self.histogram_recorder {
r.record_conn_reuse_count(self.reuse_conn_count);
}
}
}
impl H2TaskContext {
pub(super) fn new(
args: &Arc<BenchH2Args>,
proc_args: &Arc<ProcArgs>,
runtime_stats: &Arc<HttpRuntimeStats>,
histogram_recorder: Option<HttpHistogramRecorder>,
pool: Option<Arc<H2ConnectionPool>>,
) -> anyhow::Result<Self> {
let pre_request = args
.build_pre_request_header()
.context("failed to build request header")?;
Ok(H2TaskContext {
args: Arc::clone(args),
proc_args: Arc::clone(proc_args),
pool,
h2s: None,
reuse_conn_count: 0,
pre_request,
runtime_stats: Arc::clone(runtime_stats),
histogram_recorder,
})
}
fn drop_connection(&mut self) {
self.h2s = None;
}
async fn fetch_stream(&mut self) -> anyhow::Result<SendRequest<Bytes>> {
if let Some(pool) = &self.pool {
return pool.fetch_stream().await;
}
if let Some(h2s) = self.h2s.clone() {
if let Ok(ups_send_req) = h2s.ready().await {
self.reuse_conn_count += 1;
return Ok(ups_send_req);
}
}
if self.reuse_conn_count > 0 {
if let Some(r) = &mut self.histogram_recorder {
r.record_conn_reuse_count(self.reuse_conn_count);
}
self.reuse_conn_count = 0;
}
self.runtime_stats.add_conn_attempt();
let h2s = match tokio::time::timeout(
self.args.connect_timeout,
self.args
.new_h2_connection(&self.runtime_stats, &self.proc_args),
)
.await
{
Ok(Ok(h2s)) => h2s,
Ok(Err(e)) => return Err(e),
            Err(_) => return Err(anyhow!("timed out getting a new connection")),
};
self.runtime_stats.add_conn_success();
let s = h2s
.clone()
.ready()
.await
.map_err(|e| anyhow!("failed to open new stream on new connection: {e:?}"))?;
self.h2s = Some(h2s);
Ok(s)
}
async fn run_with_stream(
&mut self,
time_started: Instant,
mut send_req: SendRequest<Bytes>,
) -> anyhow::Result<()> {
let req = self
.pre_request
.build_request()
.context("failed to build request header")?;
// send hdr
let (rsp_fut, _) = send_req
.send_request(req, true)
.map_err(|e| anyhow!("failed to send request: {e:?}"))?;
let send_hdr_time = time_started.elapsed();
if let Some(r) = &mut self.histogram_recorder {
r.record_send_hdr_time(send_hdr_time);
}
// recv hdr
let rsp = match tokio::time::timeout(self.args.timeout, rsp_fut).await {
Ok(Ok(rsp)) => rsp,
Ok(Err(e)) => return Err(anyhow!("failed to read response: {e}")),
            Err(_) => return Err(anyhow!("timed out reading the response")),
};
let (rsp, mut rsp_recv_body) = rsp.into_parts();
let recv_hdr_time = time_started.elapsed();
if let Some(r) = &mut self.histogram_recorder {
r.record_recv_hdr_time(recv_hdr_time);
}
if let Some(ok_status) = self.args.ok_status {
if rsp.status != ok_status {
return Err(anyhow!(
"Got rsp code {} while {} is expected",
rsp.status.as_u16(),
ok_status.as_u16()
));
}
}
// recv body
if !rsp_recv_body.is_end_stream() {
while let Some(r) = rsp_recv_body.data().await {
if let Err(e) = r {
return Err(anyhow!("failed to recv rsp body: {e:?}"));
}
}
let _ = rsp_recv_body
.trailers()
.await
.map_err(|e| anyhow!("failed to recv rsp trailers: {e:?}"))?;
}
Ok(())
}
}
#[async_trait]
impl BenchTaskContext for H2TaskContext {
fn mark_task_start(&self) {
self.runtime_stats.add_task_total();
self.runtime_stats.inc_task_alive();
}
fn mark_task_passed(&self) {
self.runtime_stats.add_task_passed();
self.runtime_stats.dec_task_alive();
}
fn mark_task_failed(&self) {
self.runtime_stats.add_task_failed();
self.runtime_stats.dec_task_alive();
}
async fn run(&mut self, _task_id: usize, time_started: Instant) -> anyhow::Result<()> {
let send_req = self
.fetch_stream()
.await
.context("fetch new stream failed")?;
match self.run_with_stream(time_started, send_req).await {
Ok(_) => {
let total_time = time_started.elapsed();
if let Some(r) = &mut self.histogram_recorder {
r.record_total_time(total_time);
}
if self.args.no_multiplex {
self.drop_connection();
}
Ok(())
}
Err(e) => {
self.drop_connection();
Err(e)
}
}
}
}
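run_with_stream above is the whole h2 exchange: get a ready SendRequest, send a header-only request, await the response, then drain the body. A stripped-down, self-contained version of that flow, assuming a local server speaking cleartext HTTP/2 (h2c prior knowledge); the address is a placeholder:

use h2::client;
use http::Request;
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let tcp = TcpStream::connect("127.0.0.1:8080").await?;
    let (send_req, connection) = client::handshake(tcp).await?;
    // the connection future must keep being polled for any stream to make progress
    tokio::spawn(async move {
        let _ = connection.await;
    });
    // wait until the connection can accept a new stream
    let mut send_req = send_req.ready().await?;
    let req = Request::get("http://127.0.0.1:8080/").body(())?;
    // end_of_stream = true: header-only request, as in run_with_stream above
    let (rsp_fut, _send_stream) = send_req.send_request(req, true)?;
    let rsp = rsp_fut.await?;
    println!("status: {}", rsp.status());
    let mut body = rsp.into_body();
    while let Some(chunk) = body.data().await {
        let chunk = chunk?;
        // hand received bytes back to the flow-control window
        let _ = body.flow_control().release_capacity(chunk.len());
    }
    Ok(())
}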


@ -0,0 +1,18 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
mod stats;
pub(super) use stats::{HttpHistogram, HttpHistogramRecorder, HttpRuntimeStats};


@ -0,0 +1,143 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::time::Duration;
use cadence::{Gauged, StatsdClient};
use hdrhistogram::{sync::Recorder, Histogram, SyncHistogram};
use g3_types::ext::DurationExt;
use crate::target::BenchHistogram;
pub(crate) struct HttpHistogram {
send_hdr_time: SyncHistogram<u64>,
recv_hdr_time: SyncHistogram<u64>,
total_time: SyncHistogram<u64>,
conn_reuse_count: SyncHistogram<u64>,
}
impl HttpHistogram {
pub(crate) fn new() -> Self {
HttpHistogram {
send_hdr_time: Histogram::new(3).unwrap().into_sync(),
recv_hdr_time: Histogram::new(3).unwrap().into_sync(),
total_time: Histogram::new(3).unwrap().into_sync(),
conn_reuse_count: Histogram::new(3).unwrap().into_sync(),
}
}
pub(crate) fn recorder(&self) -> HttpHistogramRecorder {
HttpHistogramRecorder {
send_hdr_time: self.send_hdr_time.recorder(),
recv_hdr_time: self.recv_hdr_time.recorder(),
total_time: self.total_time.recorder(),
conn_reuse_count: self.conn_reuse_count.recorder(),
}
}
}
impl BenchHistogram for HttpHistogram {
fn refresh(&mut self) {
self.send_hdr_time.refresh();
self.recv_hdr_time.refresh();
self.total_time.refresh();
self.conn_reuse_count.refresh();
}
fn emit(&self, client: &StatsdClient) {
macro_rules! emit_histogram {
($field:ident, $name:literal) => {
let min = self.$field.min();
client
.gauge_with_tags(concat!("h1.", $name, ".min"), min)
.send();
let max = self.$field.max();
client
.gauge_with_tags(concat!("h1.", $name, ".max"), max)
.send();
let mean = self.$field.mean();
client
.gauge_with_tags(concat!("h1.", $name, ".mean"), mean)
.send();
                // hdrhistogram percentiles are on a 0-100 scale
                let pct50 = self.$field.value_at_percentile(50.0);
                client
                    .gauge_with_tags(concat!("h1.", $name, ".pct50"), pct50)
                    .send();
                let pct80 = self.$field.value_at_percentile(80.0);
                client
                    .gauge_with_tags(concat!("h1.", $name, ".pct80"), pct80)
                    .send();
                let pct90 = self.$field.value_at_percentile(90.0);
                client
                    .gauge_with_tags(concat!("h1.", $name, ".pct90"), pct90)
                    .send();
                let pct95 = self.$field.value_at_percentile(95.0);
                client
                    .gauge_with_tags(concat!("h1.", $name, ".pct95"), pct95)
                    .send();
                let pct98 = self.$field.value_at_percentile(98.0);
                client
                    .gauge_with_tags(concat!("h1.", $name, ".pct98"), pct98)
                    .send();
                let pct99 = self.$field.value_at_percentile(99.0);
                client
                    .gauge_with_tags(concat!("h1.", $name, ".pct99"), pct99)
                    .send();
};
}
emit_histogram!(send_hdr_time, "time.send_hdr");
emit_histogram!(recv_hdr_time, "time.recv_hdr");
emit_histogram!(total_time, "time.total");
}
fn summary(&self) {
Self::summary_histogram_title("# Connection Re-Usage:");
Self::summary_data_line("Req/Conn:", &self.conn_reuse_count);
Self::summary_histogram_title("# Duration Times");
Self::summary_duration_line("SendHdr:", &self.send_hdr_time);
Self::summary_duration_line("RecvHdr:", &self.recv_hdr_time);
Self::summary_duration_line("Total:", &self.total_time);
Self::summary_newline();
Self::summary_total_percentage(&self.total_time);
}
}
pub(crate) struct HttpHistogramRecorder {
send_hdr_time: Recorder<u64>,
recv_hdr_time: Recorder<u64>,
total_time: Recorder<u64>,
conn_reuse_count: Recorder<u64>,
}
impl HttpHistogramRecorder {
pub(crate) fn record_send_hdr_time(&mut self, dur: Duration) {
let _ = self.send_hdr_time.record(dur.as_nanos_u64());
}
pub(crate) fn record_recv_hdr_time(&mut self, dur: Duration) {
let _ = self.recv_hdr_time.record(dur.as_nanos_u64());
}
pub(crate) fn record_total_time(&mut self, dur: Duration) {
let _ = self.total_time.record(dur.as_nanos_u64());
}
pub(crate) fn record_conn_reuse_count(&mut self, count: u64) {
let _ = self.conn_reuse_count.record(count);
}
}
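The recorder/refresh split above is hdrhistogram's sync API: every worker owns a cheap Recorder, and the owning thread folds samples in with refresh(). A minimal sketch (note that value_at_percentile takes a 0-100 percentile):

use hdrhistogram::{Histogram, SyncHistogram};

fn main() {
    let mut hist: SyncHistogram<u64> = Histogram::<u64>::new(3).unwrap().into_sync();
    let mut recorder = hist.recorder();
    let worker = std::thread::spawn(move || {
        for v in [120_u64, 340, 75, 990] {
            let _ = recorder.record(v);
        }
        // samples are flushed into the histogram when the recorder drops
    });
    worker.join().unwrap();
    hist.refresh(); // fold in everything the recorders reported
    println!("p90 = {}", hist.value_at_percentile(90.0));
}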


@ -0,0 +1,21 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
mod histogram;
mod runtime;
pub(crate) use histogram::{HttpHistogram, HttpHistogramRecorder};
pub(crate) use runtime::HttpRuntimeStats;


@ -0,0 +1,186 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::atomic::{AtomicI64, AtomicU64, Ordering};
use std::time::Duration;
use cadence::{Counted, Gauged, StatsdClient};
use g3_io_ext::{LimitedReaderStats, LimitedWriterStats};
use crate::target::BenchRuntimeStats;
pub(crate) struct HttpRuntimeStats {
target: &'static str,
task_total: AtomicU64,
task_alive: AtomicI64,
task_passed: AtomicU64,
task_failed: AtomicU64,
conn_attempt: AtomicU64,
conn_attempt_total: AtomicU64,
conn_success: AtomicU64,
conn_success_total: AtomicU64,
conn_close_error: AtomicU64,
conn_close_timeout: AtomicU64,
tcp_read: AtomicU64,
tcp_write: AtomicU64,
tcp_read_total: AtomicU64,
tcp_write_total: AtomicU64,
}
impl HttpRuntimeStats {
pub(crate) fn new(target: &'static str) -> Self {
HttpRuntimeStats {
target,
task_total: AtomicU64::new(0),
task_alive: AtomicI64::new(0),
task_passed: AtomicU64::new(0),
task_failed: AtomicU64::new(0),
conn_attempt: AtomicU64::new(0),
conn_attempt_total: AtomicU64::new(0),
conn_success: AtomicU64::new(0),
conn_success_total: AtomicU64::new(0),
conn_close_error: AtomicU64::new(0),
conn_close_timeout: AtomicU64::new(0),
tcp_read: AtomicU64::new(0),
tcp_write: AtomicU64::new(0),
tcp_read_total: AtomicU64::new(0),
tcp_write_total: AtomicU64::new(0),
}
}
pub(crate) fn add_task_total(&self) {
self.task_total.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn inc_task_alive(&self) {
self.task_alive.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn dec_task_alive(&self) {
self.task_alive.fetch_sub(1, Ordering::Relaxed);
}
pub(crate) fn add_task_passed(&self) {
self.task_passed.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_task_failed(&self) {
self.task_failed.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_conn_attempt(&self) {
self.conn_attempt.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_conn_success(&self) {
self.conn_success.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_conn_close_fail(&self) {
self.conn_close_error.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_conn_close_timeout(&self) {
self.conn_close_timeout.fetch_add(1, Ordering::Relaxed);
}
}
impl LimitedReaderStats for HttpRuntimeStats {
fn add_read_bytes(&self, size: usize) {
self.tcp_read.fetch_add(size as u64, Ordering::Relaxed);
}
}
impl LimitedWriterStats for HttpRuntimeStats {
fn add_write_bytes(&self, size: usize) {
self.tcp_write.fetch_add(size as u64, Ordering::Relaxed);
}
}
impl BenchRuntimeStats for HttpRuntimeStats {
fn emit(&self, client: &StatsdClient) {
const TAG_NAME_TARGET: &str = "target";
macro_rules! emit_count {
($field:ident, $name:literal) => {
let $field = self.$field.swap(0, Ordering::Relaxed);
let v = i64::try_from($field).unwrap_or(i64::MAX);
client
.count_with_tags(concat!("http.", $name), v)
.with_tag(TAG_NAME_TARGET, self.target)
.send();
};
}
let task_alive = self.task_alive.load(Ordering::Relaxed);
client
.gauge_with_tags("http.task.alive", task_alive as f64)
.with_tag(TAG_NAME_TARGET, self.target)
.send();
emit_count!(task_total, "task.total");
emit_count!(task_passed, "task.passed");
emit_count!(task_failed, "task.failed");
emit_count!(conn_attempt, "connection.attempt");
self.conn_attempt_total
.fetch_add(conn_attempt, Ordering::Relaxed);
emit_count!(conn_success, "connection.success");
self.conn_success_total
.fetch_add(conn_success, Ordering::Relaxed);
emit_count!(tcp_write, "io.tcp.write");
self.tcp_write_total.fetch_add(tcp_write, Ordering::Relaxed);
emit_count!(tcp_read, "io.tcp.read");
self.tcp_read_total.fetch_add(tcp_read, Ordering::Relaxed);
}
fn summary(&self, total_time: Duration) {
let total_secs = total_time.as_secs_f64();
println!("# Connections");
let total_attempt = self.conn_attempt_total.load(Ordering::Relaxed)
+ self.conn_attempt.load(Ordering::Relaxed);
println!("Attempt count: {total_attempt}");
let total_success = self.conn_success_total.load(Ordering::Relaxed)
+ self.conn_success.load(Ordering::Relaxed);
println!("Success count: {total_success}");
println!(
"Success ratio: {:.2}%",
(total_success as f64 / total_attempt as f64) * 100.0
);
println!("Success rate: {:.3}/s", total_success as f64 / total_secs);
let close_error = self.conn_close_error.load(Ordering::Relaxed);
if close_error > 0 {
println!("Close error: {close_error}");
}
let close_timeout = self.conn_close_timeout.load(Ordering::Relaxed);
if close_timeout > 0 {
println!("Close timeout: {close_timeout}");
}
println!("# Traffic");
let total_send =
self.tcp_write_total.load(Ordering::Relaxed) + self.tcp_write.load(Ordering::Relaxed);
println!("Send bytes: {total_send}");
println!("Send rate: {:.3}B/s", total_send as f64 / total_secs);
let total_recv =
self.tcp_read_total.load(Ordering::Relaxed) + self.tcp_read.load(Ordering::Relaxed);
println!("Recv bytes: {total_recv}");
println!("Recv rate: {:.3}B/s", total_recv as f64 / total_secs);
}
}
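emit above drains each interval counter with swap(0) and folds the result into a *_total twin, so StatsD gets per-interval deltas while summary() can still report lifetime totals. The pattern in isolation:

use std::sync::atomic::{AtomicU64, Ordering};

struct Counter {
    interval: AtomicU64,
    total: AtomicU64,
}

impl Counter {
    fn add(&self, n: u64) {
        self.interval.fetch_add(n, Ordering::Relaxed);
    }
    /// Called by the periodic metrics thread: returns the per-interval delta
    /// and accumulates it so lifetime totals stay available.
    fn drain_interval(&self) -> u64 {
        let n = self.interval.swap(0, Ordering::Relaxed);
        self.total.fetch_add(n, Ordering::Relaxed);
        n
    }
    fn lifetime(&self) -> u64 {
        // drained history plus whatever is still pending in the current interval
        self.total.load(Ordering::Relaxed) + self.interval.load(Ordering::Relaxed)
    }
}

fn main() {
    let c = Counter {
        interval: AtomicU64::new(0),
        total: AtomicU64::new(0),
    };
    c.add(3);
    assert_eq!(c.drain_interval(), 3);
    c.add(2);
    assert_eq!(c.lifetime(), 5);
    println!("ok");
}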

353
g3bench/src/target/mod.rs Normal file

@ -0,0 +1,353 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::time::Duration;
use anyhow::{anyhow, Context};
use async_trait::async_trait;
use cadence::StatsdClient;
use hdrhistogram::Histogram;
use tokio::signal::unix::SignalKind;
use tokio::sync::{mpsc, Barrier, Semaphore};
use tokio::time::Instant;
use g3_signal::{ActionSignal, SigResult};
use super::ProcArgs;
mod stats;
mod tls;
use tls::{AppendTlsArgs, OpensslTlsClientArgs};
mod http;
pub mod h1;
pub mod h2;
pub mod ssl;
trait BenchHistogram {
fn refresh(&mut self);
fn emit(&self, client: &StatsdClient);
fn summary(&self);
fn summary_histogram_title(title: &str) {
println!("{title}");
println!(" min mean[+/-sd] pct90 max");
}
fn summary_newline() {
println!();
}
fn summary_data_line(name: &str, h: &Histogram<u64>) {
let d_min = h.min();
let d_mean = h.mean();
let d_std_dev = h.stdev();
let d_pct90 = h.value_at_quantile(0.9);
let d_max = h.max();
println!(
"{name:<10} {d_min:>9.3?} {d_mean:>9.3?} {d_std_dev:<9.3?} {d_pct90:>9.3?} {d_max:>9.3?}"
);
}
fn summary_duration_line(name: &str, h: &Histogram<u64>) {
const NANOS_PER_SEC: f64 = 1_000_000_000.0;
let t_min = Duration::from_nanos(h.min());
let t_mean = Duration::from_secs_f64(h.mean() / NANOS_PER_SEC);
let t_std_dev = Duration::from_secs_f64(h.stdev() / NANOS_PER_SEC);
let t_pct90 = Duration::from_nanos(h.value_at_quantile(0.9));
let t_max = Duration::from_nanos(h.max());
println!(
"{name:<10} {t_min:>9.3?} {t_mean:>9.3?} {t_std_dev:9.3?} {t_pct90:>9.3?} {t_max:>9.3?}"
);
}
fn summary_total_percentage(h: &Histogram<u64>) {
macro_rules! print_pct {
($pct:literal) => {
let v = Duration::from_nanos(h.value_at_percentile($pct as f64));
println!("{:4}% {v:8.3?}", $pct);
};
}
println!("Percentage of the requests served within a certain time");
print_pct!(50);
print_pct!(66);
print_pct!(75);
print_pct!(80);
print_pct!(90);
print_pct!(95);
print_pct!(98);
print_pct!(99);
print_pct!(100);
}
}
trait BenchRuntimeStats {
fn emit(&self, client: &StatsdClient);
fn summary(&self, total_time: Duration);
}
#[async_trait]
trait BenchTaskContext {
fn mark_task_start(&self);
fn mark_task_passed(&self);
fn mark_task_failed(&self);
async fn run(&mut self, task_id: usize, time_started: Instant) -> anyhow::Result<()>;
}
trait BenchTarget<RS, H, C>
where
RS: BenchRuntimeStats,
H: BenchHistogram,
C: BenchTaskContext,
{
fn new_context(&self) -> anyhow::Result<C>;
fn fetch_runtime_stats(&self) -> Arc<RS>;
fn take_histogram(&mut self) -> Option<H>;
fn notify_finish(&mut self) {}
}
fn quit_at_sigint(_count: u32) -> SigResult {
stats::mark_force_quit();
SigResult::Break
}
async fn run<RS, H, C, T>(mut target: T, proc_args: &ProcArgs) -> anyhow::Result<()>
where
RS: BenchRuntimeStats + Send + Sync + 'static,
H: BenchHistogram + Send + 'static,
C: BenchTaskContext + Send + 'static,
T: BenchTarget<RS, H, C> + Send + Sync + 'static,
{
let sync_sem = Arc::new(Semaphore::new(0));
let sync_barrier = Arc::new(Barrier::new(proc_args.concurrency + 1));
let (sender, mut receiver) = mpsc::channel::<usize>(proc_args.concurrency);
let progress_bar = proc_args.new_progress_bar();
let progress_bar_atomic = if progress_bar.is_some() {
Some(Arc::new(AtomicU64::new(0)))
} else {
None
};
stats::init_global_state(proc_args.requests, proc_args.log_error_count);
tokio::spawn(
ActionSignal::new(SignalKind::interrupt(), &quit_at_sigint)
.map_err(|e| anyhow!("failed to set handler for SIGINT: {e:?}"))?,
);
for i in 0..proc_args.concurrency {
let sem = Arc::clone(&sync_sem);
let barrier = Arc::clone(&sync_barrier);
let quit_sender = sender.clone();
let progress_bar_atomic = progress_bar_atomic.clone();
let mut context = target
.new_context()
.context(format!("failed to to create context #{i}"))?;
let task_unconstrained = proc_args.task_unconstrained;
let rt = super::worker::select_handle(i).unwrap_or_else(tokio::runtime::Handle::current);
rt.spawn(async move {
sem.add_permits(1);
barrier.wait().await;
let global_state = stats::global_state();
let mut req_count = 0;
while let Some(task_id) = global_state.fetch_request() {
let time_start = Instant::now();
context.mark_task_start();
let rt = if task_unconstrained {
tokio::task::unconstrained(context.run(task_id, time_start)).await
} else {
context.run(task_id, time_start).await
};
match rt {
Ok(_) => {
context.mark_task_passed();
if let Some(bar_atomic) = &progress_bar_atomic {
bar_atomic.fetch_add(1, Ordering::Relaxed);
}
global_state.add_passed();
}
Err(e) => {
context.mark_task_failed();
if global_state.check_log_error() {
eprintln!("! request {task_id} failed: {e:?}\n");
}
global_state.add_failed();
}
}
req_count += 1;
}
drop(context);
if let Err(e) = quit_sender.send(req_count).await {
eprintln!("failed to send quit signal: {e}");
}
});
}
drop(sender);
let _run_permit = sync_sem
.acquire_many(proc_args.concurrency as u32)
.await
.context("failed to start all task contexts")?;
let quit_notifier = Arc::new(AtomicBool::new(false));
// progress bar
let progress_bar_handler = if let Some(progress_bar) = progress_bar {
if let Some(progress_bar_atomic) = progress_bar_atomic.clone() {
let quit_notifier = quit_notifier.clone();
let handler = std::thread::Builder::new()
.name("progress-bar".to_string())
.spawn(move || {
loop {
progress_bar.inc(progress_bar_atomic.swap(0, Ordering::Relaxed));
if quit_notifier.load(Ordering::Relaxed) {
break;
}
std::thread::sleep(Duration::from_millis(100));
}
progress_bar
})
.map_err(|e| anyhow!("failed to create progress bar thread: {e}"))?;
Some(handler)
} else {
None
}
} else {
None
};
// simple runtime stats
let runtime_stats_handler =
if let Some((statsd_client, emit_duration)) = proc_args.new_statsd_client() {
let runtime_stats = target.fetch_runtime_stats();
let quit_notifier = quit_notifier.clone();
let handler = std::thread::Builder::new()
.name("runtime-stats".to_string())
.spawn(move || loop {
runtime_stats.emit(&statsd_client);
statsd_client.flush_sink();
if quit_notifier.load(Ordering::Relaxed) {
break;
}
std::thread::sleep(emit_duration);
})
.map_err(|e| anyhow!("failed to create runtime stats thread: {e}"))?;
Some(handler)
} else {
None
};
// histogram runtime stats
let histogram_stats_handler = if let Some(mut histogram) = target.take_histogram() {
let quit_notifier = quit_notifier.clone();
let thread_builder = std::thread::Builder::new().name("histogram".to_string());
if let Some((statsd_client, emit_duration)) = proc_args.new_statsd_client() {
let handler = thread_builder
.spawn(move || {
loop {
histogram.refresh();
histogram.emit(&statsd_client);
if quit_notifier.load(Ordering::Relaxed) {
break;
}
std::thread::sleep(emit_duration);
}
histogram
})
.map_err(|e| anyhow!("failed to create histogram metrics thread: {e}"))?;
Some(handler)
} else {
let handler = thread_builder
.spawn(move || {
loop {
histogram.refresh();
if quit_notifier.load(Ordering::Relaxed) {
break;
}
std::thread::sleep(Duration::from_millis(100));
}
histogram
})
.map_err(|e| anyhow!("failed to create histogram refresh thread: {e}"))?;
Some(handler)
}
} else {
None
};
let time_start = Instant::now();
sync_barrier.wait().await;
if let Some(time_limit) = proc_args.time_limit {
tokio::spawn(async move {
tokio::time::sleep(time_limit).await;
stats::mark_force_quit();
});
}
let mut distribute_histogram = Histogram::<u64>::new(3).unwrap();
while let Some(req_count) = receiver.recv().await {
distribute_histogram.record(req_count as u64).unwrap();
}
let total_time = time_start.elapsed();
quit_notifier.store(true, Ordering::Relaxed);
if let Some(handler) = progress_bar_handler {
match handler.join() {
Ok(bar) => bar.finish_and_clear(),
Err(e) => eprintln!("error to join progress bar thread: {e:?}"),
}
}
stats::global_state().summary(total_time, &distribute_histogram);
if let Some(handler) = runtime_stats_handler {
let _ = handler.join();
}
H::summary_newline();
target.notify_finish();
target.fetch_runtime_stats().summary(total_time);
if let Some(handler) = histogram_stats_handler {
match handler.join() {
Ok(mut histogram) => {
histogram.refresh();
histogram.summary();
}
Err(e) => eprintln!("error to join histogram stats thread: {e:?}"),
}
}
Ok(())
}
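run() above releases all workers at the same instant: each spawned task adds one semaphore permit and parks on a barrier, and the coordinator acquires one permit per worker (proving every context was built) before joining the barrier itself. A minimal sketch of that start gate:

use std::sync::Arc;

use tokio::sync::{Barrier, Semaphore};

#[tokio::main]
async fn main() {
    let concurrency = 4;
    let sem = Arc::new(Semaphore::new(0));
    let barrier = Arc::new(Barrier::new(concurrency + 1));
    let mut handles = Vec::new();
    for i in 0..concurrency {
        let sem = Arc::clone(&sem);
        let barrier = Arc::clone(&barrier);
        handles.push(tokio::spawn(async move {
            sem.add_permits(1);   // signal "this worker is ready"
            barrier.wait().await; // park until the coordinator opens the gate
            println!("worker {i} started");
        }));
    }
    // proves every worker checked in before timing starts
    let _permits = sem.acquire_many(concurrency as u32).await.unwrap();
    barrier.wait().await; // all workers start at (nearly) the same instant
    for h in handles {
        h.await.unwrap();
    }
}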


@ -0,0 +1,72 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::Arc;
use clap::{ArgMatches, Command};
use super::{BenchTarget, BenchTaskContext, ProcArgs};
mod opts;
use opts::BenchSslArgs;
mod stats;
use stats::{SslHistogram, SslHistogramRecorder, SslRuntimeStats};
mod task;
use task::SslTaskContext;
pub const COMMAND: &str = "ssl";
struct SslTarget {
args: Arc<BenchSslArgs>,
proc_args: Arc<ProcArgs>,
stats: Arc<SslRuntimeStats>,
histogram: Option<SslHistogram>,
}
impl BenchTarget<SslRuntimeStats, SslHistogram, SslTaskContext> for SslTarget {
fn new_context(&self) -> anyhow::Result<SslTaskContext> {
let histogram_recorder = self.histogram.as_ref().map(|h| h.recorder());
SslTaskContext::new(&self.args, &self.proc_args, &self.stats, histogram_recorder)
}
fn fetch_runtime_stats(&self) -> Arc<SslRuntimeStats> {
self.stats.clone()
}
fn take_histogram(&mut self) -> Option<SslHistogram> {
self.histogram.take()
}
}
pub fn command() -> Command {
opts::add_ssl_args(Command::new(COMMAND))
}
pub async fn run(proc_args: &Arc<ProcArgs>, cmd_args: &ArgMatches) -> anyhow::Result<()> {
let mut ssl_args = opts::parse_ssl_args(cmd_args)?;
ssl_args.resolve_target_address(proc_args).await?;
let target = SslTarget {
args: Arc::new(ssl_args),
proc_args: Arc::clone(proc_args),
stats: Arc::new(SslRuntimeStats::default()),
histogram: Some(SslHistogram::new()),
};
super::run(target, proc_args).await
}


@ -0,0 +1,185 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::borrow::Cow;
use std::net::{IpAddr, SocketAddr};
use std::pin::Pin;
use std::time::Duration;
use anyhow::{anyhow, Context};
use clap::{value_parser, Arg, ArgMatches, Command};
use openssl::ssl::SslVerifyMode;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::net::TcpStream;
use tokio_openssl::SslStream;
use g3_types::collection::{SelectiveVec, WeightedValue};
use g3_types::net::{OpensslTlsClientConfig, OpensslTlsClientConfigBuilder, UpstreamAddr};
use super::ProcArgs;
use crate::target::{AppendTlsArgs, OpensslTlsClientArgs};
const SSL_ARG_TARGET: &str = "target";
const SSL_ARG_LOCAL_ADDRESS: &str = "local-address";
const SSL_ARG_TIMEOUT: &str = "timeout";
const SSL_ARG_CONNECT_TIMEOUT: &str = "connect-timeout";
pub(super) struct BenchSslArgs {
target: UpstreamAddr,
bind: Option<IpAddr>,
pub(super) timeout: Duration,
pub(super) connect_timeout: Duration,
pub(super) tls: OpensslTlsClientArgs,
target_addrs: SelectiveVec<WeightedValue<SocketAddr>>,
}
impl BenchSslArgs {
fn new(target: UpstreamAddr) -> Self {
let tls = OpensslTlsClientArgs {
config: Some(OpensslTlsClientConfigBuilder::with_cache_for_one_site()),
..Default::default()
};
BenchSslArgs {
target,
bind: None,
timeout: Duration::from_secs(10),
connect_timeout: Duration::from_secs(10),
tls,
target_addrs: SelectiveVec::empty(),
}
}
pub(super) async fn resolve_target_address(
&mut self,
proc_args: &ProcArgs,
) -> anyhow::Result<()> {
self.target_addrs = proc_args.resolve(&self.target).await?;
Ok(())
}
pub(super) async fn new_tcp_connection(
&self,
proc_args: &ProcArgs,
) -> anyhow::Result<TcpStream> {
let peer = *proc_args.select_peer(&self.target_addrs);
let socket = g3_socket::tcp::new_socket_to(
peer.ip(),
self.bind,
&Default::default(),
&Default::default(),
true,
)
.map_err(|e| anyhow!("failed to setup socket to peer {peer}: {e:?}"))?;
socket
.connect(peer)
.await
.map_err(|e| anyhow!("connect to {peer} error: {e:?}"))
}
pub(super) async fn tls_connect_to_target<S>(
&self,
tls_client: &OpensslTlsClientConfig,
stream: S,
) -> anyhow::Result<SslStream<S>>
where
S: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
let tls_name = self
.tls
.tls_name
.as_ref()
.map(|v| Cow::Borrowed(v.as_str()))
.unwrap_or_else(|| self.target.host_str());
let mut ssl = tls_client
.build_ssl(&tls_name, self.target.port())
.context("failed to build ssl context")?;
if self.tls.no_verify {
ssl.set_verify(SslVerifyMode::NONE);
}
let mut tls_stream = SslStream::new(ssl, stream)
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
Pin::new(&mut tls_stream)
.connect()
.await
.map_err(|e| anyhow!("tls connect to {tls_name} failed: {e}"))?;
Ok(tls_stream)
}
}
pub(super) fn add_ssl_args(app: Command) -> Command {
app.arg(
Arg::new(SSL_ARG_TARGET)
.required(true)
.num_args(1)
.value_parser(value_parser!(UpstreamAddr)),
)
.arg(
Arg::new(SSL_ARG_LOCAL_ADDRESS)
.value_name("LOCAL IP ADDRESS")
.short('B')
.long(SSL_ARG_LOCAL_ADDRESS)
.num_args(1)
.value_parser(value_parser!(IpAddr)),
)
.arg(
Arg::new(SSL_ARG_TIMEOUT)
.value_name("TIMEOUT DURATION")
.help("SSL handshake timeout")
.default_value("10s")
.long(SSL_ARG_TIMEOUT)
.num_args(1),
)
.arg(
Arg::new(SSL_ARG_CONNECT_TIMEOUT)
.value_name("TIMEOUT DURATION")
.help("Timeout for connection to next peer")
.default_value("10s")
.long(SSL_ARG_CONNECT_TIMEOUT)
.num_args(1),
)
.append_tls_args()
}
pub(super) fn parse_ssl_args(args: &ArgMatches) -> anyhow::Result<BenchSslArgs> {
let target = if let Some(v) = args.get_one::<UpstreamAddr>(SSL_ARG_TARGET) {
v.clone()
} else {
return Err(anyhow!("no target set"));
};
let mut ssl_args = BenchSslArgs::new(target);
if let Some(ip) = args.get_one::<IpAddr>(SSL_ARG_LOCAL_ADDRESS) {
ssl_args.bind = Some(*ip);
}
if let Some(timeout) = g3_clap::humanize::get_duration(args, SSL_ARG_TIMEOUT)? {
ssl_args.timeout = timeout;
}
if let Some(timeout) = g3_clap::humanize::get_duration(args, SSL_ARG_CONNECT_TIMEOUT)? {
ssl_args.connect_timeout = timeout;
}
ssl_args
.tls
.parse_tls_args(args)
.context("invalid tls config")?;
Ok(ssl_args)
}
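tls_connect_to_target above drives an OpenSSL handshake over an async stream through tokio_openssl::SslStream and a pinned connect(). The same handshake, self-contained, using a stock SslConnector; host and port are placeholders:

use std::pin::Pin;

use openssl::ssl::{SslConnector, SslMethod};
use tokio::net::TcpStream;
use tokio_openssl::SslStream;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let connector = SslConnector::builder(SslMethod::tls_client())?.build();
    // into_ssl() sets SNI and the hostname used for certificate verification
    let ssl = connector.configure()?.into_ssl("example.com")?;
    let tcp = TcpStream::connect("example.com:443").await?;
    let mut tls = SslStream::new(ssl, tcp)?;
    // the handshake future needs the stream pinned, as in the code above
    Pin::new(&mut tls).connect().await?;
    println!("negotiated {}", tls.ssl().version_str());
    Ok(())
}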


@ -0,0 +1,110 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::time::Duration;
use cadence::{Gauged, StatsdClient};
use hdrhistogram::{sync::Recorder, Histogram, SyncHistogram};
use g3_types::ext::DurationExt;
use crate::target::BenchHistogram;
pub(crate) struct SslHistogram {
total_time: SyncHistogram<u64>,
}
impl SslHistogram {
pub(crate) fn new() -> Self {
SslHistogram {
total_time: Histogram::new(3).unwrap().into_sync(),
}
}
pub(crate) fn recorder(&self) -> SslHistogramRecorder {
SslHistogramRecorder {
total_time: self.total_time.recorder(),
}
}
}
impl BenchHistogram for SslHistogram {
fn refresh(&mut self) {
self.total_time.refresh();
}
fn emit(&self, client: &StatsdClient) {
        // metrics share the "ssl." namespace with SslRuntimeStats below
        macro_rules! emit_histogram {
            ($field:ident, $name:literal) => {
                let min = self.$field.min();
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".min"), min)
                    .send();
                let max = self.$field.max();
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".max"), max)
                    .send();
                let mean = self.$field.mean();
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".mean"), mean)
                    .send();
                // hdrhistogram percentiles are on a 0-100 scale
                let pct50 = self.$field.value_at_percentile(50.0);
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".pct50"), pct50)
                    .send();
                let pct80 = self.$field.value_at_percentile(80.0);
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".pct80"), pct80)
                    .send();
                let pct90 = self.$field.value_at_percentile(90.0);
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".pct90"), pct90)
                    .send();
                let pct95 = self.$field.value_at_percentile(95.0);
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".pct95"), pct95)
                    .send();
                let pct98 = self.$field.value_at_percentile(98.0);
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".pct98"), pct98)
                    .send();
                let pct99 = self.$field.value_at_percentile(99.0);
                client
                    .gauge_with_tags(concat!("ssl.", $name, ".pct99"), pct99)
                    .send();
            };
        }
emit_histogram!(total_time, "time.total");
}
fn summary(&self) {
Self::summary_histogram_title("# Duration Times");
Self::summary_duration_line("Total:", &self.total_time);
Self::summary_newline();
Self::summary_total_percentage(&self.total_time);
}
}
pub(crate) struct SslHistogramRecorder {
total_time: Recorder<u64>,
}
impl SslHistogramRecorder {
pub(crate) fn record_total_time(&mut self, dur: Duration) {
let _ = self.total_time.record(dur.as_nanos_u64());
}
}


@ -0,0 +1,21 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
mod runtime;
pub(crate) use runtime::SslRuntimeStats;
mod histogram;
pub(crate) use histogram::{SslHistogram, SslHistogramRecorder};


@ -0,0 +1,159 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::atomic::{AtomicI64, AtomicU64, Ordering};
use std::time::Duration;
use cadence::{Counted, Gauged, StatsdClient};
use g3_io_ext::{LimitedReaderStats, LimitedWriterStats};
use crate::target::BenchRuntimeStats;
#[derive(Default)]
pub(crate) struct SslRuntimeStats {
task_total: AtomicU64,
task_alive: AtomicI64,
task_passed: AtomicU64,
task_failed: AtomicU64,
conn_attempt: AtomicU64,
conn_attempt_total: AtomicU64,
conn_success: AtomicU64,
conn_success_total: AtomicU64,
conn_close_error: AtomicU64,
conn_close_timeout: AtomicU64,
tcp_read: AtomicU64,
tcp_write: AtomicU64,
tcp_read_total: AtomicU64,
tcp_write_total: AtomicU64,
}
impl SslRuntimeStats {
pub(crate) fn add_task_total(&self) {
self.task_total.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn inc_task_alive(&self) {
self.task_alive.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn dec_task_alive(&self) {
self.task_alive.fetch_sub(1, Ordering::Relaxed);
}
pub(crate) fn add_task_passed(&self) {
self.task_passed.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_task_failed(&self) {
self.task_failed.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_conn_attempt(&self) {
self.conn_attempt.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_conn_success(&self) {
self.conn_success.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_conn_close_fail(&self) {
self.conn_close_error.fetch_add(1, Ordering::Relaxed);
}
pub(crate) fn add_conn_close_timeout(&self) {
self.conn_close_timeout.fetch_add(1, Ordering::Relaxed);
}
}
impl LimitedReaderStats for SslRuntimeStats {
fn add_read_bytes(&self, size: usize) {
self.tcp_read.fetch_add(size as u64, Ordering::Relaxed);
}
}
impl LimitedWriterStats for SslRuntimeStats {
fn add_write_bytes(&self, size: usize) {
self.tcp_write.fetch_add(size as u64, Ordering::Relaxed);
}
}
impl BenchRuntimeStats for SslRuntimeStats {
fn emit(&self, client: &StatsdClient) {
macro_rules! emit_count {
($field:ident, $name:literal) => {
let $field = self.$field.swap(0, Ordering::Relaxed);
let v = i64::try_from($field).unwrap_or(i64::MAX);
client.count_with_tags(concat!("ssl.", $name), v).send();
};
}
let task_alive = self.task_alive.load(Ordering::Relaxed);
client
.gauge_with_tags("ssl.task.alive", task_alive as f64)
.send();
emit_count!(task_total, "task.total");
emit_count!(task_passed, "task.passed");
emit_count!(task_failed, "task.failed");
emit_count!(conn_attempt, "connection.attempt");
self.conn_attempt_total
.fetch_add(conn_attempt, Ordering::Relaxed);
emit_count!(conn_success, "connection.success");
self.conn_success_total
.fetch_add(conn_success, Ordering::Relaxed);
emit_count!(tcp_write, "io.tcp.write");
self.tcp_write_total.fetch_add(tcp_write, Ordering::Relaxed);
emit_count!(tcp_read, "io.tcp.read");
self.tcp_read_total.fetch_add(tcp_read, Ordering::Relaxed);
}
fn summary(&self, total_time: Duration) {
let total_secs = total_time.as_secs_f64();
println!("# Connections");
let total_attempt = self.conn_attempt_total.load(Ordering::Relaxed)
+ self.conn_attempt.load(Ordering::Relaxed);
println!("Attempt count: {total_attempt}");
let total_success = self.conn_success_total.load(Ordering::Relaxed)
+ self.conn_success.load(Ordering::Relaxed);
println!("Success count: {total_success}");
println!(
"Success ratio: {:.2}%",
(total_success as f64 / total_attempt as f64) * 100.0
);
println!("Success rate: {:.3}/s", total_success as f64 / total_secs);
let close_error = self.conn_close_error.load(Ordering::Relaxed);
if close_error > 0 {
println!("Close error: {close_error}");
}
let close_timeout = self.conn_close_timeout.load(Ordering::Relaxed);
if close_timeout > 0 {
println!("Close timeout: {close_timeout}");
}
println!("# Traffic");
let total_send =
self.tcp_write_total.load(Ordering::Relaxed) + self.tcp_write.load(Ordering::Relaxed);
println!("Send bytes: {total_send}");
println!("Send rate: {:.3}B/s", total_send as f64 / total_secs);
let total_recv =
self.tcp_read_total.load(Ordering::Relaxed) + self.tcp_read.load(Ordering::Relaxed);
println!("Recv bytes: {total_recv}");
println!("Recv rate: {:.3}B/s", total_recv as f64 / total_secs);
}
}


@ -0,0 +1,125 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::sync::Arc;
use std::time::Duration;
use anyhow::anyhow;
use async_trait::async_trait;
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use tokio::time::Instant;
use g3_io_ext::LimitedStream;
use super::{BenchSslArgs, BenchTaskContext, ProcArgs, SslHistogramRecorder, SslRuntimeStats};
pub(super) struct SslTaskContext {
args: Arc<BenchSslArgs>,
proc_args: Arc<ProcArgs>,
runtime_stats: Arc<SslRuntimeStats>,
histogram_recorder: Option<SslHistogramRecorder>,
}
impl SslTaskContext {
pub(super) fn new(
args: &Arc<BenchSslArgs>,
proc_args: &Arc<ProcArgs>,
runtime_stats: &Arc<SslRuntimeStats>,
histogram_recorder: Option<SslHistogramRecorder>,
) -> anyhow::Result<Self> {
Ok(SslTaskContext {
args: Arc::clone(args),
proc_args: Arc::clone(proc_args),
runtime_stats: Arc::clone(runtime_stats),
histogram_recorder,
})
}
async fn connect(&self) -> anyhow::Result<LimitedStream<TcpStream>> {
self.runtime_stats.add_conn_attempt();
let stream = match tokio::time::timeout(
self.args.connect_timeout,
self.args.new_tcp_connection(&self.proc_args),
)
.await
{
Ok(Ok(s)) => s,
Ok(Err(e)) => return Err(e),
            Err(_) => return Err(anyhow!("timed out getting a new connection")),
};
self.runtime_stats.add_conn_success();
let speed_limit = &self.proc_args.tcp_sock_speed_limit;
Ok(LimitedStream::new(
stream,
speed_limit.shift_millis,
speed_limit.max_south,
speed_limit.max_north,
self.runtime_stats.clone(),
))
}
}
#[async_trait]
impl BenchTaskContext for SslTaskContext {
fn mark_task_start(&self) {
self.runtime_stats.add_task_total();
self.runtime_stats.inc_task_alive();
}
fn mark_task_passed(&self) {
self.runtime_stats.add_task_passed();
self.runtime_stats.dec_task_alive();
}
fn mark_task_failed(&self) {
self.runtime_stats.add_task_failed();
self.runtime_stats.dec_task_alive();
}
async fn run(&mut self, _task_id: usize, time_started: Instant) -> anyhow::Result<()> {
let tcp_stream = self.connect().await?;
let tls_client = self.args.tls.client.as_ref().unwrap();
match tokio::time::timeout(
self.args.timeout,
self.args.tls_connect_to_target(tls_client, tcp_stream),
)
.await
{
Ok(Ok(mut tls_stream)) => {
let total_time = time_started.elapsed();
if let Some(r) = &mut self.histogram_recorder {
r.record_total_time(total_time);
}
let runtime_stats = self.runtime_stats.clone();
                // shut down gracefully so the TLS session ticket can be reused
match tokio::time::timeout(Duration::from_secs(4), tls_stream.shutdown()).await {
Ok(Ok(_)) => {}
Ok(Err(_e)) => runtime_stats.add_conn_close_fail(),
Err(_) => runtime_stats.add_conn_close_timeout(),
}
Ok(())
}
Ok(Err(e)) => Err(e),
Err(_) => Err(anyhow!("tls handshake timeout")),
}
}
}
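The ssl task above brackets both the connect and the close with tokio::time::timeout, shutting the stream down explicitly so the session completes cleanly. The timeout shape in isolation, against a placeholder address:

use std::time::Duration;

use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let mut stream = match tokio::time::timeout(
        Duration::from_secs(10),
        TcpStream::connect("example.com:443"),
    )
    .await
    {
        Ok(Ok(s)) => s,
        Ok(Err(e)) => return Err(e.into()),
        Err(_) => return Err(anyhow::anyhow!("timed out getting a new connection")),
    };
    // a bounded, best-effort close, as in run() above
    match tokio::time::timeout(Duration::from_secs(4), stream.shutdown()).await {
        Ok(Ok(())) => println!("closed cleanly"),
        Ok(Err(e)) => println!("close error: {e}"),
        Err(_) => println!("close timed out"),
    }
    Ok(())
}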

163
g3bench/src/target/stats.rs Normal file

@ -0,0 +1,163 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::ops::{Deref, DerefMut};
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::time::Duration;
use hdrhistogram::Histogram;
use once_cell::sync::Lazy;
static mut GLOBAL_STATE: Lazy<GlobalState> = Lazy::new(GlobalState::default);
pub(super) fn global_state() -> &'static GlobalState {
unsafe { GLOBAL_STATE.deref() }
}
pub(super) fn mark_force_quit() {
global_state().mark_force_quit();
}
pub(super) fn init_global_state(requests: Option<usize>, log_error_count: usize) {
let stats_mut = unsafe { GLOBAL_STATE.deref_mut() };
stats_mut.total_count = requests;
stats_mut
.total_left
.store(requests.unwrap_or_default(), Ordering::Relaxed);
stats_mut
.log_error_left
.store(log_error_count, Ordering::Relaxed);
}
pub(super) struct GlobalState {
total_count: Option<usize>,
force_quit: AtomicBool,
total_left: AtomicUsize,
total_passed: AtomicUsize,
total_failed: AtomicUsize,
log_error_left: AtomicUsize,
request_id: AtomicUsize,
}
impl Default for GlobalState {
fn default() -> Self {
GlobalState::new(None, 0)
}
}
impl GlobalState {
pub(super) fn new(requests: Option<usize>, log_error_count: usize) -> Self {
GlobalState {
total_count: requests,
force_quit: AtomicBool::new(false),
total_left: AtomicUsize::new(requests.unwrap_or_default()),
total_passed: AtomicUsize::default(),
total_failed: AtomicUsize::default(),
log_error_left: AtomicUsize::new(log_error_count),
request_id: AtomicUsize::default(),
}
}
fn mark_force_quit(&self) {
self.force_quit.store(true, Ordering::Relaxed);
}
pub(super) fn fetch_request(&self) -> Option<usize> {
if self.force_quit.load(Ordering::Relaxed) {
return None;
}
if self.total_count.is_some() {
let mut curr = self.total_left.load(Ordering::Acquire);
loop {
if curr == 0 {
return None;
}
match self.total_left.compare_exchange(
curr,
curr - 1,
Ordering::AcqRel,
Ordering::Acquire,
) {
Ok(_) => break,
Err(actual) => curr = actual,
}
}
}
Some(self.request_id.fetch_add(1, Ordering::Relaxed))
}
pub(super) fn check_log_error(&self) -> bool {
let mut curr = self.log_error_left.load(Ordering::Acquire);
loop {
if curr == 0 {
return false;
}
match self.log_error_left.compare_exchange(
curr,
curr - 1,
Ordering::AcqRel,
Ordering::Acquire,
) {
Ok(_) => return true,
Err(actual) => curr = actual,
}
}
}
pub(super) fn add_passed(&self) {
self.total_passed.fetch_add(1, Ordering::Relaxed);
}
pub(super) fn add_failed(&self) {
self.total_failed.fetch_add(1, Ordering::Relaxed);
}
pub(super) fn summary(&self, total_time: Duration, distribution: &Histogram<u64>) {
println!("Time taken for tests: {total_time:?}");
let passed = self.total_passed.load(Ordering::Relaxed);
println!("Complete requests: {passed:<10}");
let failed = self.total_failed.load(Ordering::Relaxed);
if failed > 0 {
println!("Failed requests: {failed}");
}
let left = self.total_left.load(Ordering::Relaxed);
if left > 0 {
println!("Left requests: {left}");
}
println!(
"Requests per second: {} [#/sec] (mean)",
passed as f64 / total_time.as_secs_f64()
);
println!("Requests distribution:");
println!(" min {}", distribution.min());
println!(
" mean {:.2}[+/- {:.2}]",
distribution.mean(),
distribution.stdev()
);
println!(" pct90 {}", distribution.value_at_percentile(90.0));
println!(" max {}", distribution.max());
}
}
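fetch_request above hand-rolls a compare_exchange loop to decrement the remaining-request budget without underflow. For reference, std's AtomicUsize::fetch_update expresses the same loop more compactly; a sketch:

use std::sync::atomic::{AtomicUsize, Ordering};

/// Take one request from the budget; false once the budget is exhausted.
fn take_one(budget: &AtomicUsize) -> bool {
    budget
        .fetch_update(Ordering::AcqRel, Ordering::Acquire, |left| left.checked_sub(1))
        .is_ok()
}

fn main() {
    let budget = AtomicUsize::new(2);
    assert!(take_one(&budget));
    assert!(take_one(&budget));
    assert!(!take_one(&budget)); // checked_sub(1) on 0 yields None, so nothing is stored
    println!("ok");
}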


@ -0,0 +1,413 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::fs::File;
use std::io::Read;
use std::path::{Path, PathBuf};
use std::str::FromStr;
use anyhow::{anyhow, Context};
use clap::{value_parser, Arg, ArgAction, ArgMatches, Command, ValueHint};
use openssl::pkey::{PKey, Private};
use openssl::x509::X509;
use g3_types::net::{
OpensslCertificatePair, OpensslProtocol, OpensslTlsClientConfig, OpensslTlsClientConfigBuilder,
};
const TLS_ARG_CA_CERT: &str = "tls-ca-cert";
const TLS_ARG_CERT: &str = "tls-cert";
const TLS_ARG_KEY: &str = "tls-key";
const TLS_ARG_NAME: &str = "tls-name";
const TLS_ARG_SESSION_CACHE: &str = "tls-session-cache";
const TLS_ARG_NO_VERIFY: &str = "tls-no-verify";
const TLS_ARG_NO_SNI: &str = "tls-no-sni";
const TLS_ARG_PROTOCOL: &str = "tls-protocol";
const TLS_ARG_CIPHERS: &str = "tls-ciphers";
const PROXY_TLS_ARG_CA_CERT: &str = "proxy-tls-ca-cert";
const PROXY_TLS_ARG_CERT: &str = "proxy-tls-cert";
const PROXY_TLS_ARG_KEY: &str = "proxy-tls-key";
const PROXY_TLS_ARG_NAME: &str = "proxy-tls-name";
const PROXY_TLS_ARG_SESSION_CACHE: &str = "proxy-tls-session-cache";
const PROXY_TLS_ARG_NO_VERIFY: &str = "proxy-tls-no-verify";
const PROXY_TLS_ARG_NO_SNI: &str = "proxy-tls-no-sni";
const PROXY_TLS_ARG_PROTOCOL: &str = "proxy-tls-protocol";
const PROXY_TLS_ARG_CIPHERS: &str = "proxy-tls-ciphers";
const SESSION_CACHE_VALUES: [&str; 2] = ["off", "builtin"];
const PROTOCOL_VALUES: [&str; 2] = ["tls1.2", "tls1.3"];
pub(crate) trait AppendTlsArgs {
fn append_tls_args(self) -> Self;
fn append_proxy_tls_args(self) -> Self;
}
#[derive(Default)]
pub(crate) struct OpensslTlsClientArgs {
pub(crate) config: Option<OpensslTlsClientConfigBuilder>,
pub(crate) client: Option<OpensslTlsClientConfig>,
pub(crate) tls_name: Option<String>,
pub(crate) cert_pair: OpensslCertificatePair,
pub(crate) no_verify: bool,
}
impl OpensslTlsClientArgs {
fn parse_tls_name(&mut self, args: &ArgMatches, id: &str) {
if let Some(name) = args.get_one::<String>(id) {
self.tls_name = Some(name.to_string());
}
}
fn parse_ca_cert(&mut self, args: &ArgMatches, id: &str) -> anyhow::Result<()> {
let tls_config = self
.config
.as_mut()
.ok_or_else(|| anyhow!("no tls config found"))?;
if let Some(file) = args.get_one::<PathBuf>(id) {
let ca_certs = load_certs(file).context(format!(
"failed to load ca certs from file {}",
file.display()
))?;
tls_config
.set_ca_certificates(ca_certs)
.context("failed to set ca certificates")?;
}
Ok(())
}
fn parse_client_auth(
&mut self,
args: &ArgMatches,
cert_id: &str,
key_id: &str,
) -> anyhow::Result<()> {
if let Some(file) = args.get_one::<PathBuf>(cert_id) {
let cert = load_certs(file).context(format!(
"failed to load client certificate from file {}",
file.display()
))?;
self.cert_pair
.set_certificates(cert)
.context("failed to set client certificate")?;
}
if let Some(file) = args.get_one::<PathBuf>(key_id) {
let key = load_key(file).context(format!(
"failed to load client private key from file {}",
file.display()
))?;
self.cert_pair
.set_private_key(key)
.context("failed to set client private key")?;
}
Ok(())
}
fn parse_protocol_and_args(
&mut self,
args: &ArgMatches,
protocol_id: &str,
ciphers_id: &str,
) -> anyhow::Result<()> {
let tls_config = self
.config
.as_mut()
.ok_or_else(|| anyhow!("no tls config found"))?;
if let Some(protocol) = args.get_one::<String>(protocol_id) {
let protocol =
OpensslProtocol::from_str(protocol).context("invalid openssl protocol")?;
tls_config.set_protocol(protocol);
}
if let Some(ciphers) = args.get_one::<String>(ciphers_id) {
let ciphers = ciphers.split(':').map(|s| s.to_string()).collect();
tls_config.set_ciphers(ciphers);
}
Ok(())
}
fn parse_session_cache(&mut self, args: &ArgMatches, id: &str) -> anyhow::Result<()> {
let tls_config = self
.config
.as_mut()
.ok_or_else(|| anyhow!("no tls config found"))?;
match args.get_one::<String>(id).map(|s| s.as_str()) {
Some("off") => {
tls_config.set_no_session_cache();
Ok(())
}
Some("builtin") => {
tls_config.set_use_builtin_session_cache();
Ok(())
}
Some(s) => Err(anyhow!("unsupported session cache type {s}")),
None => Ok(()),
}
}
fn parse_no_verify(&mut self, args: &ArgMatches, id: &str) {
if args.get_flag(id) {
self.no_verify = true;
}
}
fn parse_no_sni(&mut self, args: &ArgMatches, id: &str) -> anyhow::Result<()> {
let tls_config = self
.config
.as_mut()
.ok_or_else(|| anyhow!("no tls config found"))?;
if args.get_flag(id) {
tls_config.set_disable_sni();
}
Ok(())
}
fn build_client(&mut self) -> anyhow::Result<()> {
let tls_config = self
.config
.as_mut()
.ok_or_else(|| anyhow!("no tls config found"))?;
if self.cert_pair.is_set() {
tls_config.set_cert_pair(self.cert_pair.clone());
}
tls_config.check().context("invalid tls config")?;
let tls_client = tls_config.build().context("failed to build tls client")?;
self.client = Some(tls_client);
Ok(())
}
pub(crate) fn parse_tls_args(&mut self, args: &ArgMatches) -> anyhow::Result<()> {
if self.config.is_none() {
return Ok(());
}
self.parse_tls_name(args, TLS_ARG_NAME);
self.parse_ca_cert(args, TLS_ARG_CA_CERT)?;
self.parse_client_auth(args, TLS_ARG_CERT, TLS_ARG_KEY)?;
self.parse_protocol_and_args(args, TLS_ARG_PROTOCOL, TLS_ARG_CIPHERS)?;
self.parse_session_cache(args, TLS_ARG_SESSION_CACHE)?;
self.parse_no_verify(args, TLS_ARG_NO_VERIFY);
self.parse_no_sni(args, TLS_ARG_NO_SNI)?;
self.build_client()
}
pub(crate) fn parse_proxy_tls_args(&mut self, args: &ArgMatches) -> anyhow::Result<()> {
if self.config.is_none() {
return Ok(());
}
self.parse_tls_name(args, PROXY_TLS_ARG_NAME);
self.parse_ca_cert(args, PROXY_TLS_ARG_CA_CERT)?;
self.parse_client_auth(args, PROXY_TLS_ARG_CERT, PROXY_TLS_ARG_KEY)?;
self.parse_protocol_and_args(args, PROXY_TLS_ARG_PROTOCOL, PROXY_TLS_ARG_CIPHERS)?;
self.parse_session_cache(args, PROXY_TLS_ARG_SESSION_CACHE)?;
self.parse_no_verify(args, PROXY_TLS_ARG_NO_VERIFY);
self.parse_no_sni(args, PROXY_TLS_ARG_NO_SNI)?;
self.build_client()
}
}
fn load_certs(path: &Path) -> anyhow::Result<Vec<X509>> {
const MAX_FILE_SIZE: usize = 4_000_000; // 4MB
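    // MAX_FILE_SIZE both pre-allocates the buffer and, via take() below,
    // bounds how much of the file is read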
let mut contents = String::with_capacity(MAX_FILE_SIZE);
let file =
File::open(path).map_err(|e| anyhow!("unable to open file {}: {e}", path.display()))?;
file.take(MAX_FILE_SIZE as u64)
.read_to_string(&mut contents)
.map_err(|e| anyhow!("failed to read contents of file {}: {e}", path.display()))?;
let certs = X509::stack_from_pem(contents.as_bytes())
.map_err(|e| anyhow!("invalid certificate file({}): {e}", path.display()))?;
if certs.is_empty() {
Err(anyhow!(
"no valid certificate found in file {}",
path.display()
))
} else {
Ok(certs)
}
}
fn load_key(path: &Path) -> anyhow::Result<PKey<Private>> {
const MAX_FILE_SIZE: usize = 256_000; // 256KB
let mut contents = String::with_capacity(MAX_FILE_SIZE);
let file =
File::open(path).map_err(|e| anyhow!("unable to open file {}: {e}", path.display()))?;
file.take(MAX_FILE_SIZE as u64)
.read_to_string(&mut contents)
.map_err(|e| anyhow!("failed to read contents of file {}: {e}", path.display()))?;
PKey::private_key_from_pem(contents.as_bytes())
.map_err(|e| anyhow!("invalid private key file({}): {e}", path.display()))
}
impl AppendTlsArgs for Command {
fn append_tls_args(self) -> Command {
append_tls_args(self)
}
fn append_proxy_tls_args(self) -> Command {
append_proxy_tls_args(self)
}
}
pub(crate) fn append_tls_args(cmd: Command) -> Command {
cmd.arg(
Arg::new(TLS_ARG_NAME)
.help("TLS verify name for target site")
.value_name("SERVER NAME")
.long(TLS_ARG_NAME)
.num_args(1),
)
.arg(
Arg::new(TLS_ARG_CA_CERT)
.help("TLS CA certificate file for target site")
.value_name("CA CERTIFICATE FILE")
.long(TLS_ARG_CA_CERT)
.num_args(1)
.value_hint(ValueHint::FilePath)
.value_parser(value_parser!(PathBuf)),
)
.arg(
Arg::new(TLS_ARG_CERT)
.help("TLS client certificate file for target site")
.value_name("CERTIFICATE FILE")
.long(TLS_ARG_CERT)
.num_args(1)
.value_hint(ValueHint::FilePath)
.value_parser(value_parser!(PathBuf))
.requires(TLS_ARG_KEY),
)
.arg(
Arg::new(TLS_ARG_KEY)
.help("TLS client private key file for target site")
.value_name("PRIVATE KEY FILE")
.long(TLS_ARG_KEY)
.num_args(1)
.value_hint(ValueHint::FilePath)
.value_parser(value_parser!(PathBuf))
.requires(TLS_ARG_CERT),
)
.arg(
Arg::new(TLS_ARG_SESSION_CACHE)
.help("Set TLS session cache type for target site")
.value_name("TYPE")
.long(TLS_ARG_SESSION_CACHE)
.num_args(1)
.value_parser(SESSION_CACHE_VALUES),
)
.arg(
Arg::new(TLS_ARG_NO_VERIFY)
.help("Skip TLS verify for target site")
.action(ArgAction::SetTrue)
.long(TLS_ARG_NO_VERIFY),
)
.arg(
Arg::new(TLS_ARG_NO_SNI)
.help("Disable TLS SNI for target site")
.action(ArgAction::SetTrue)
.long(TLS_ARG_NO_SNI),
)
.arg(
Arg::new(TLS_ARG_PROTOCOL)
            .help("Set TLS protocol for target site")
.value_name("PROTOCOL")
.long(TLS_ARG_PROTOCOL)
.value_parser(PROTOCOL_VALUES)
.num_args(1),
)
.arg(
Arg::new(TLS_ARG_CIPHERS)
            .help("Set TLS ciphers for target site")
.value_name("CIPHERS")
.long(TLS_ARG_CIPHERS)
.num_args(1)
.requires(TLS_ARG_PROTOCOL),
)
}
pub(crate) fn append_proxy_tls_args(cmd: Command) -> Command {
cmd.arg(
Arg::new(PROXY_TLS_ARG_NAME)
.help("TLS verify name for proxy")
.value_name("SERVER NAME")
.long(PROXY_TLS_ARG_NAME)
.num_args(1),
)
.arg(
Arg::new(PROXY_TLS_ARG_CA_CERT)
.help("TLS CA certificate file for proxy")
.value_name("CA CERTIFICATE FILE")
.long(PROXY_TLS_ARG_CA_CERT)
.num_args(1)
.value_hint(ValueHint::FilePath)
.value_parser(value_parser!(PathBuf)),
)
.arg(
Arg::new(PROXY_TLS_ARG_CERT)
.help("TLS client certificate file for proxy")
.value_name("CERTIFICATE FILE")
.long(PROXY_TLS_ARG_CERT)
.num_args(1)
.value_hint(ValueHint::FilePath)
.value_parser(value_parser!(PathBuf))
.requires(PROXY_TLS_ARG_KEY),
)
.arg(
Arg::new(PROXY_TLS_ARG_KEY)
.help("TLS client private key file for proxy")
.value_name("PRIVATE KEY FILE")
.long(PROXY_TLS_ARG_KEY)
.num_args(1)
.value_hint(ValueHint::FilePath)
.value_parser(value_parser!(PathBuf))
.requires(PROXY_TLS_ARG_CERT),
)
.arg(
Arg::new(PROXY_TLS_ARG_SESSION_CACHE)
.help("Set TLS session cache type for proxy")
.value_name("TYPE")
.long(PROXY_TLS_ARG_SESSION_CACHE)
.num_args(1)
.value_parser(SESSION_CACHE_VALUES),
)
.arg(
Arg::new(PROXY_TLS_ARG_NO_VERIFY)
.help("Skip TLS verify for proxy")
.action(ArgAction::SetTrue)
.long(PROXY_TLS_ARG_NO_VERIFY),
)
.arg(
Arg::new(PROXY_TLS_ARG_NO_SNI)
.help("Disable TLS SNI for proxy")
.action(ArgAction::SetTrue)
.long(PROXY_TLS_ARG_NO_SNI),
)
.arg(
Arg::new(PROXY_TLS_ARG_PROTOCOL)
            .help("Set TLS protocol for proxy")
.value_name("PROTOCOL")
.long(PROXY_TLS_ARG_PROTOCOL)
.value_parser(PROTOCOL_VALUES)
.num_args(1),
)
.arg(
Arg::new(PROXY_TLS_ARG_CIPHERS)
            .help("Set TLS ciphers for proxy")
.value_name("CIPHERS")
.long(PROXY_TLS_ARG_CIPHERS)
.num_args(1)
.requires(PROXY_TLS_ARG_PROTOCOL),
)
}
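
Taken together: `append_tls_args()` registers the `--tls-*` options on a clap `Command`, and `OpensslTlsClientArgs::parse_tls_args()` consumes the matches and builds the final client config. A minimal wiring sketch (editorial, not part of the commit; the builder constructor below is a placeholder, as the real construction happens in callers not shown here):

```rust
// Editorial sketch: wiring the helpers above together in a caller.
use clap::Command;

fn tls_client_from_cli() -> anyhow::Result<OpensslTlsClientArgs> {
    let matches = Command::new("demo").append_tls_args().get_matches();
    let mut tls = OpensslTlsClientArgs {
        // hypothetical constructor; real callers supply a concrete builder
        config: Some(OpensslTlsClientConfigBuilder::default()),
        ..Default::default()
    };
    // parse_tls_args() returns Ok early when `config` is None, so targets
    // without TLS support can share this code path unchanged
    tls.parse_tls_args(&matches)?;
    // on success, tls.client holds the built OpensslTlsClientConfig
    Ok(tls)
}
```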

41
g3bench/src/worker.rs Normal file
View file

@ -0,0 +1,41 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use tokio::runtime::Handle;
use g3_runtime::unaided::{UnaidedRuntimeConfig, WorkersGuard};
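// NOTE (editorial): the unsafe accesses below assume all pushes happen inside
// spawn_workers() during startup, before select_handle() is first called, so
// WORKER_HANDLERS is effectively read-only afterwards.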
static mut WORKER_HANDLERS: Vec<Handle> = Vec::new();
pub async fn spawn_workers(config: &UnaidedRuntimeConfig) -> anyhow::Result<WorkersGuard> {
let guard = config
.start(&|_, handle| unsafe { WORKER_HANDLERS.push(handle) })
.await?;
Ok(guard)
}
pub(super) fn select_handle(concurrency_index: usize) -> Option<Handle> {
unsafe {
match WORKER_HANDLERS.len() {
0 => None,
1 => Some(WORKER_HANDLERS[0].clone()),
n => {
let handle = WORKER_HANDLERS.get_unchecked(concurrency_index % n);
Some(handle.clone())
}
}
}
}
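
The handle returned by `select_handle` pins each concurrency slot to one worker runtime. A hedged sketch of the consuming side (editorial; the real dispatch code is not part of this excerpt):

```rust
// Editorial sketch: spawning one task per concurrency index onto the
// worker runtime chosen by select_handle().
async fn run_one(index: usize) {
    let _ = index; // benchmark task body (hypothetical)
}

fn dispatch(index: usize) -> tokio::task::JoinHandle<()> {
    match select_handle(index) {
        // run on the selected unaided worker runtime
        Some(handle) => handle.spawn(run_one(index)),
        // no workers configured: fall back to the ambient tokio runtime
        None => tokio::spawn(run_one(index)),
    }
}
```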

814
g3proxy/CHANGELOG Normal file
View file

@ -0,0 +1,814 @@
v1.7.10:
- Feature: support happy-eyeballs when resolving redirected domains
- Feature: allow to set resolve strategy at user-site level
- Optimization: enable tcp_nodelay by default if needed
v1.7.9:
- BUG FIX: fix the parse of weighted upstream address
- BUG FIX: fix the clean of offline servers
- Optimization: listen in each worker when listen_in_worker is enabled
- Feature: add new ctl command to force quit offline servers
v1.7.8:
- BUG FIX: fix the set of package version in deb package
- Feature: allow to set multiple cert pairs in rustls server config
- Feature: allow to listen in worker, and run tasks in unconstrained mode
- Feature: allow to start listen instance according to available parallelism
- Optimization: update the definition of the openssl tls client config
- Compatibility: add g3-compat to allow compile on platforms with glibc < 2.27
- Compatibility: use vendored-openssl on platforms with libssl < 1.1.1
v1.7.7:
- Feature: make libcurl as optional feature
- Feature: add more config options to openssl tls client
- Internal: move most of daemon control code to g3-daemon lib crate
v1.7.6:
- Feature: allow to config the max io events per tick value for tokio runtime
- BUG FIX: fix the print of package version
- Optimization: add yield size limit to http body transfer futures
v1.7.5:
- Optimization: use icap connection pool at auditor level instead of audit handle level
- Feature: ignore Via header generated by ICAP server when sending request to upstream
- BUG FIX: fix AsyncRead implementation of chunk decoder
v1.7.4:
- Feature: default to send client address and username to ICAP server
- Feature: allow to respond shared names back to ICAP server
- Feature: allow to set application audit ratio in auditor and user config
v1.7.3:
- Feature: allow to use icap_reqmod_service and icap_respmod_service in auditor
- BUG FIX: make sure the upstream response header is sent out in case the upstream closes its body io
v1.7.2:
- Feature: allow to send client_ip in route_query escaper
- Optimization: various updates to http parse code
v1.7.1:
- Feature: add --dot-graph command line option to draw internal dependency graph
- BUG FIX: fix command line handling
v1.7.0:
- Feature: allow to set username for redis cluster config in ProxyFloat escaper
- Feature: support custom config file extension
- Feature: support round robin select policy in various escaper and server
- Feature: add user_type tag to user and user site metrics
- Feature: replace http_tproxy and sni_proxy with a new protocol inspection enabled sni_proxy
- BUG FIX: fix spawn new reload of resolver
v1.6.0:
- Feature: forbid ipv6 discard-only address block by default
- Optimization: use less mutex in openssl tls client session cache
v1.5.6:
- BUG FIX: do not consider c-ares NODATA as error
- Optimization: ignore the first error in happy eyeballs resolver
v1.5.5:
- Feature: switch to the happy eyeballs resolve method in:
- RouteResolved escaper
- udp connect method in DirectFixed escaper
- resolver query ctl interface
v1.5.4:
- Feature: allow to set/unset hostname in syslog message header
v1.5.3:
- Feature: use happy eyeballs algorithm in Direct* and Proxy* escaper
- Feature: enable ftp over http support in DirectFloat escaper
- Feature: support multiple upstream addresses on TcpStream and TlsStream server
v1.5.2:
- BUG FIX: fix panic when parsing ipv6 socks requests
- Optimization: socks: return error early for empty domains
- Feature: drop local_match in route_upstream escaper
v1.5.1:
- BUG FIX: fix the build of deb package
v1.5.0:
- Feature: reintroduce the python dynamic user source and make it optional
- Feature: keep ipv4 compatible address in ipv6 form
- Feature: allow to log to fluentd by using its Forward Protocol
v1.4.2:
- Feature: allow to specify lua version via features, and default to lua5.1
- Feature: add g3proxy-lua to test the functionality of lua
- BUG FIX: fix auth error reply in http_rproxy server
v1.4.1:
- Feature: allow to set report script for lua dynamic user source
- BUG FIX: fix the exact domain match for explicit user sites
v1.4.0:
- Feature: add user level layer 7 alive connection metrics
- Feature: add tcp_conn_rate_limit to user config
- Optimization: rename tcp and udp speed limit config options
v1.3.5:
- BUG FIX: cache dynamic users only if valid
- BUG FIX: revert to use mlua 0.7.4
v1.3.4:
- Feature: allow to set site config for each user
- Optimization: close http persistent connections early when server goes offline
v1.3.3:
- Feature: use clap to parse command line options
- BUG FIX: fix set of resolve strategy for g3proxy-ctl resolver query command
v1.3.2:
- Feature: support traditional private key PEM format
- Feature: add compile info to g3proxy
- Optimization: rename http_gateway server to http_rproxy
v1.3.1:
- Feature: allow to use unaided worker threads for tasks
- BUG FIX: fix reload all config
v1.3.0:
- Feature: add socks_use_udp_associate option to user config
- Optimization: use buffered metrics sink
- Optimization: switch to use std Mutex instead of parking_lot Mutex
- Optimization: rename some resolver ttl config options
v1.2.2:
- Feature: update openssl tls client config
v1.2.1:
- BUG FIX: fix deb package dependency
v1.2.0:
- Feature: switch to curl for simple http requests and add more config options:
- connect_timeout
- interface
- Feature: use distro default luajit
v1.1.5:
- BUG FIX: fix debian package file
v1.1.4:
- Feature: add back the simple 'http' dynamic user source
- Feature: add trust-dns resolver, which can also be configured to use DoT or DoH
- Feature: switch to use openssl tls client for outgoing tls connections
- Feature: allow to disable sni and session cache in rustls client config
v1.1.3:
- Feature: add 'lua' source for dynamic user fetching
- Feature: remove 'python' source for dynamic user fetching
- Feature: add 'route_client' escaper
v1.1.2:
- Import all bug fixes from v1.0.1 and update packages
v1.1.1:
- Feature: add new http_tproxy server
- Feature: rename server ppdp_tcp_port to intelli_proxy
v1.1.0:
- Feature: add 'python' source for dynamic user fetching
- Feature: remove 'http' source for dynamic user fetching
v1.0.1:
- BUG FIX: fix handle of 100-continue response after request body sent out
- BUG FIX: do not close the http connection if no body is expected in response
v1.0.0:
- First Long Term Support Branch
v0.9.10:
- BUG FIX: fix rfc5424 syslog timestamp format
v0.9.9:
- Feature: rename escaper route_dst_ip to route_resolved
- BUG FIX: fix update of resolve strategy based on ipv4_only/ipv6_only settings
- BUG FIX: avoid the panic caused by parsing float values as time duration
v0.9.8:
- BUG FIX: add '=' as KV delimiter to rfc3164 syslog
v0.9.7:
- BUG FIX: fix parse of msgpack string
- BUG FIX: close remote tcp sockets in time in proxy_socks5 escaper
- BUG FIX: really set the ca certificate when building the tls client config
v0.9.6:
- Feature: allow to display verbose ftp command message in g3proxy-ftp
- Feature: allow to change timezone via control commands
- Feature: allow to generate various shell completion scripts for g3proxy-ctl and g3proxy-ftp
v0.9.5:
- Feature: add tls_stream server
- Feature: check time offset at start time, and make the explicit use of local time thread safe
v0.9.4:
- BUG FIX: drop cmake build dependency to build on old OS
v0.9.3:
- Feature: add blake3 to fast hashed passphrase and make all hashes optional
- Feature: allow to set negotiation timeout value for next proxy peers
- Feature: allow to set handshake timeout value for servers with tls enabled, and add listen.timeout metrics
- Feature: drop tls code in plain_tcp_port and add plain_tls_port
- Feature: move ingress network filter check to a very early stage, which results in:
- rename metrics server.forbidden.src_blocked to listen.dropped
- add ingress network filter config to plain_tcp_port / plain_tls_port / ppdp_tcp_port
v0.9.2:
- Feature: allow to add extra metrics tags to escaper metrics
- Feature: delete useless tcp_copy_flush_interval server config option
- Feature: add user level upstream traffic stats
- BUG FIX: allow to use route escaper behind http gateway server
v0.9.1:
- Feature: add sni_proxy server
v0.9.0:
- Feature: add jump hash as a pick policy for SelectiveVec
- Feature: remove deprecated escaper config options:
- tcp_connect_max_retry
- tcp_connect_each_timeout
- Feature: allow to use the first Authorization for upstream ftp auth in http proxy server
- Feature: add route_select escaper, and remove the old route_random escaper
- Feature: add route_query escaper
- Feature: allow to start tls at server level behind multiple plain tcp ports
- Feature: allow to set client side tcp socket options at user level
- Feature: use PKCS #8 format for private key
- Feature: delete append_forwarded_for config option from proxy_http(s) escaper
- Feature: delete remote_keepalive_eof_wait config option from http_proxy server
- Feature: add http_gateway server
v0.8.11:
- Feature: allow to set tcp and udp socket options at server side
v0.8.10:
- regenerate release tarball
v0.8.9:
- Feature: allow to set SO_MARK for tcp socket
- Feature: allow to set more udp socket options at user and escaper level:
- IP_TTL
- IP_TOS
- SO_MARK
v0.8.8:
- Feature: allow to set probe_interval and probe_count in tcp keepalive config
- Feature: allow to set more tcp socket options at user and escaper level:
- TCP_NODELAY
- TCP_MSS
- IP_TTL
- IP_TOS
v0.8.7:
- BUG FIX: fix resolve of dns name with '_' in its CNAME
v0.8.6:
- Feature: add tcp_connect config option to user config
- Feature: add tcp_connect config option to escaper config, and deprecate the following:
- tcp_connect_max_retry
- tcp_connect_each_timeout
v0.8.5:
- Feature: add --version command line option
- Feature: add proxy_request_filter to user config
v0.8.4:
- Feature: allow to forward all ftp requests to next proxy
- Feature: enable https forward by default
v0.8.3:
- Feature: allow to add extra metrics tags in server and user metrics
- Feature: add server and server extra tags in user forbidden metrics
- Feature: add more detailed resolver error metrics
v0.8.2:
- Optimization: flush eagerly in io copy
v0.8.1:
- Feature: allow passing userid to the next proxy in proxy_http(s) escaper
- BUG FIX: fix leak of forwarded header to upstream in proxy_http(s) escaper
v0.8.0:
- Feature: support file upload and delete in ftp over http request
- Optimization: change default tcp copy flush interval to 200ms
- Optimization: explicit forbid empty upstream address
v0.7.27:
- Feature: support single range request in ftp over http request
- Feature: support tls server config in plain_tcp_port server
- Optimization: always ignore body related headers in 1xx and 204 http responses as specified in rfc7230
v0.7.26:
- BUG FIX: fix panic in https_forward task if the upstream has no domain
- Feature: support tls offload in tcp stream
- Feature: set bind_address_no_port for udp connect socket
v0.7.25:
- BUG FIX: various fixes for ftp over http
v0.7.24:
- Feature: support udp associate and udp connect on proxy_socks5 escaper
- Feature: restore support for domain as target address in udp associate task
- Feature: prefer to use mime type returned by ftp server
- Feature: do acl check in udp associate task
- Feature: force quit tasks during process shutdown
- BUG FIX: ftp: determine transfer size right after setting transfer type
v0.7.23:
- Feature: allow to set auto_reply_local_ip_map for socks_proxy server
- BUG FIX: fix limit for tcp copy config
v0.7.22:
- Feature: add default simplified udp connect mode for socks server
- Feature: do not require the same address family for tcp and udp if udp bind ip is set
- BUG FIX: fix subnet_match config in RouteUpstream escaper
v0.7.21:
- Feature: refactor task idle check logic:
- remove 'task_idle_duration' config at server side
- add 'task_idle_check_duration' config at server side
- add 'task_idle_max_count' at server and user side
- Feature: add src denied stats to server forbidden stats
- Feature: add subnet_match to dst_host_filter_set acl rule set
- Feature: add subnet_match rule to RouteUpstream escaper
- BUG FIX: quote the realm value in response header
v0.7.20:
- Feature: add explicit flush interval for tcp copy
- Feature: default to always try epsv for ftp transfer
- Optimization: increase default http rsp header recv timeout to 60s
v0.7.19:
- Feature: drop escaper tag from user traffic metrics
- Feature: initial version with working ftp over http support
v0.7.18:
- BUG FIX: fix panic when handle empty Host http header value
v0.7.17:
- Feature: allow to set http forward capability for proxy_http(s) escapers
We can forward https and ftp requests to next http(s) proxies from now on
- Feature: add route metrics for route type escapers
- Feature: the request and traffic metrics are now correctly set on the final escaper
- Feature: add g3proxy-ftp to test ftp functions
v0.7.16:
- BUG FIX: fix upstream addr parse error
- BUG FIX: fix set of `allow_custom_host` and `steal_forwarded_for` options for http_proxy server
- Feature: allow to set udp socket buffer size for socks_proxy server
v0.7.15:
- BUG FIX: fix miss action for ip hosts when only child and regex host rules set
- Feature: add options to control http forwarded headers
- http_proxy server: allow to delete forwarded headers in client requests
- proxy_http & proxy_https escaper: allow to append forwarded header in requests sent out
- Feature: support haproxy PROXY protocol for proxy_http and proxy_https escapers
- Feature: support CEE log syntax in syslog
- Optimization: reply with http code 409 if host header doesn't match host in uri
v0.7.14:
- BUG FIX: support ipv6 address in squared bracket as http Host value
- BUG FIX: convert ipv6 mapped ipv4 address back to ipv4 address when parsing UpstreamAddr
- BUG FIX: fix server online status after reloading runtime
- Optimization: do not create default escaper in rpc commands
- Feature: add more servers
- plain_tcp_port: just listen to a tcp port and send connections to another server
- ppdp_tcp_port: detect the proxy protocol of the connection, and send to the corresponding next server,
the type of which could be either http_proxy or socks_proxy.
- dummy_close: just close the connection
v0.7.13:
- BUG FIX: fix handle of http url with ipv6 address as host field
- Feature: add listen stats for server
- Optimization: make `append_report_ts` syslog driver config option default to false
v0.7.12:
- BUG FIX: fix rfc5424 format for appended report_ts log field
v0.7.11:
- Feature: add udp_bind_port_range config option to socks_proxy server
- Feature: default to append `report_ts` to logs sent to syslogd
- add `append_report_ts` config option to syslog driver to control the behaviour
- Optimization: ignore optional fields with empty value in logs sent to syslogd
v0.7.10:
- BUG FIX: fix counting of user level https forward io stats
- BUG FIX: fix X-BD-Upstream-Addr custom header
v0.7.9:
- Feature: http_proxy: close the connection if the previous request also failed auth
v0.7.8:
- Feature: use native async implementation from g3-syslog
- Feature: add metrics for loggers
- add logger.message.total
- add logger.message.pass
- add logger.traffic.pass
- add logger.message.drop
- Feature: sleep for twice the emit_metrics interval for metrics flushing in graceful shutdown mode
- Feature: add more resolver runtime config options
- graceful_stop_wait, which sets the delay before the thread is really stopped
- protective_query_timeout, which sets the query timeout for queries sent to the driver
- BUG FIX: fix http_proxy server config key name no_early_error_reply
- BUG FIX: shutdown the runtime thread for fail-over resolver
v0.7.7:
- Feature: change the default found action for user-agent acl rule to forbid.
- Feature: make some restrictions on uri in log:
- limit the number of characters, and add corresponding config options
- replace password field with 'xyz'
- Feature: add `user_agent` to HttpForward Task log
- Feature: add stats about resolver internal hashtable memory usage
- Optimization: increase the default async log channel size from 1024 to 4096
v0.7.6:
- Feature: allow to drain body of http requests with no auth info
- add `untrusted_read_limit` option to http_proxy to enable it and set the read limit
- Feature: add user_blocked forbidden stats to server
- Feature: add untrusted task stats to server
- add server.task.untrusted_total
- add server.task.untrusted_alive
- add server.traffic.untrusted_in.bytes
v0.7.5:
- BUG FIX: limit the maximum dns cache ttl value to avoid panic
- Feature: add config option *max_cache_ttl* to resolvers
v0.7.4:
- BUG FIX: fix selection of udp bind ipv6 address
v0.7.3:
- BUG FIX: convert ipv4-mapped ip back to ipv4 format early
- Optimization: add content-type to http proxy error response
v0.7.2:
- Feature: add new no_early_error_reply config option to http_proxy server
- Feature: add capnp rpc command to list user group and users
- Optimization: do not add user level acl stats to server level
- Optimization: add more detailed reason to task logs
v0.7.1:
- Optimization: apply stricter limits on user max alive requests
- BUG FIX: http_proxy server: fix keepalive for http 407 response
- Feature: add layer-7 http User-Agent acl rule to user config
- Feature: add ua_blocked forbidden stats for user
v0.7.0:
- FEATURE: add fail_over resolver
v0.6.9:
- FEATURE: forbid unspecified egress target address by default
- FEATURE: allow to set bind ip addresses for socks5 udp associate client side ip selection
v0.6.8:
- BUG FIX: update to tokio 1.1.1, which fixes a memory leak
v0.6.7:
- FEATURE: add resolve redirection support at user and escaper level
- FEATURE: add alive requests stats at user level
- FEATURE: allow to limit total alive requests at user level
- FEATURE: also cancel tasks belonging to blocked users in idle detection
- FEATURE: socks5 udp associate: dropped domain support for security reasons
- FEATURE: add child match rules to RouteUpstream escaper
- FEATURE: make sure cached write data is flushed when the other end closes its read side in tcp connect tasks
- BUG FIX: do correct parent domain match in child match acl rule
v0.6.6:
- BUG FIX: add cached data in buf reader to io stats
- FEATURE: allow to set log rate limit at user level
- FEATURE: add stats about log skipped requests at user level
v0.6.5:
- BUG FIX: fix log_type for shared loggers
- FEATURE: make socks5 udp associate optional and disabled by default
v0.6.4:
- BUG FIX: fix check of body type for http 304 response
- FEATURE: add escaper level forbidden stats
- FEATURE: add server & escaper level forbidden stats to user forbidden stats when possible
v0.6.3:
- BUG FIX: fix user-group reload
- BUG FIX: fix normalization for socks_proxy config keys
v0.6.2:
- BUG FIX: do not exit after accept error
- Feature: allow to discard task / escaper / resolver logs, and make this the default
- Feature: allow to set socket buffer size for c-ares resolver
- Feature: allow to use shared logger thread for server and escaper
v0.6.1:
- BUG FIX: fix idle check
v0.6.0:
- Internal: port to tokio 1.0 version
- BUG FIX: only spawn long running tasks in main runtime
v0.5.10
- BUG FIX: fix index based path selection when the index overflows
- BUG FIX: fix emit of user and server forbidden stats
v0.5.9
- Feature: add new TrickFloat escaper
- Feature: add new RouteMapping escaper
- Feature: add path selection to:
- HttpProxy server, disabled by default
- DirectFixed escaper, disabled by default
- RouteMapping escaper, always enabled, but only supports index mapping
- Feature: add general http keepalive config:
- rename keepalive_eof_wait to remote_keepalive_eof_wait for HttpProxy server
- add http_forward_upstream_keepalive to HttpProxy server, remove keepalive_idle_expire
- add http_upstream_keepalive to user config, remove http_keepalive_idle
- rename tcp_keepalive to tcp_remote_keepalive for user
v0.5.8:
- Feature: add more options to control http keepalive:
- keepalive_eof_wait: set the time to wait when checking eof of the upstream connection
- keepalive_idle_expire: set the max idle time to keep the upstream connection alive
- Feature: add http_keepalive_idle config to user config.
v0.5.7:
- Feature: allow user to specify custom resolve strategy
- Feature: add 525 reply code to http proxy
- Feature: add -t flag to g3proxy to test the format of config file
- BUG FIX: also check upstream read close while sending new requests on reused connection
- Feature: only wait for the 100-continue response before the request body is sent out
- Feature: add tcp_keepalive setting to user config
- Feature: add tcp_keepalive setting to escaper config, and deprecate old tcp_keepalive_idle config
- Feature: change default resolve pick strategy to Random instead of First.
v0.5.6
- Feature: allow to block user and set a delay before sending auth error response
- Feature: add user and server level forbidden stats
- BUG FIX: fix http forward Connection check
v0.5.5:
- Optimization: use native tls certs for locally generated http requests
- Feature: allow to auth user with traditional unix crypt
- Feature: allow to set source of proxy_float escaper to passive
v0.5.4:
- BUG FIX: fix user http_forward io stats
- BUG FIX: fix escaper http forward task count
v0.5.3
- BUG FIX: fix default stats emit duration
- BUG FIX: fix emit of user stats
v0.5.2
- Feature: add egress info to direct_float escaper
v0.5.1
- Feature: add resolver stats
- Optimization: allow more ascii chars in domain
- Optimization: add server & escaper tags to user stats
v0.5.0:
- Feature: add 'allow_custom_host' to http_proxy server
- Feature: support output of server / escaper / user stats
- added 'stat' root config
- support output to statsd
v0.4.23:
- Optimization: g3proxy-ctl can detect the default runtime dir now
- Optimization: default to create non-existent cache files
- Optimization: set up the process logger early, so warnings in config parse code can be emitted
- Optimization: resolver pick policy now applies to get_all_addrs
- Optimization: add more tcp_connect info to escape and task log:
- tcp_connect_tries: show how many times we have tried to connect
- tcp_connect_spend: show the total time we have spent on tcp connect for all tries
v0.4.22:
- Feature: rename proxy_dynamic escaper to proxy_float, and add options to set local cache
- Feature: add local cache for dynamic users
- Feature: allow to publish peers to proxy_float escaper
- Feature: add direct_float escaper
v0.4.21:
- Feature: add yield out to tcp copy and udp relay task
- Feature: add the following config to server:
- tcp_copy_yield_size
- udp_relay_packet_size
- udp_relay_yield_size
- Feature: support capnproto rpc on local controller, and add g3proxy-ctl command
v0.4.20:
- Optimization: allow to set protective_cache_ttl for error / empty resolver records
- Optimization: add 'duration' and 'source' to c-ares resolver error log
v0.4.19:
- BUG FIX: always return all resolver errors for all queries.
This fixes the regression introduced in v0.4.18
v0.4.18:
- Optimization: report misc server error in cares resolver
- Optimization: log query type in cares resolver error log
- Optimization: return early when resolve error for *First strategies
- BUG FIX: fix the number of running listen instances during reload of server
v0.4.17:
- Feature: cares resolver: allow to set bind ip for each family:
- deprecate 'bind' config option
- add 'bind_ipv4' config option
- add 'bind_ipv6' config option
- Feature: proxy escapers: allow to set bind ip for each family:
- deprecate 'bind_ip' config option
- add 'bind_ipv4' config option
- add 'bind_ipv6' config option
v0.4.16:
- Feature: add expire to user config.
- Feature: allow to builtin webpki-roots ca certs for rustls client config.
- Feature: add dynamic users to user group, the source currently supported are:
- file: sync from a local file
- http: sync through an http GET request
v0.4.15:
- Feature: add more acl rule to server and user config:
- dst_host_filter_set: limit the upstream host
- dst_port_filter: limit the upstream port
- Feature: add 'wait_time' to task log:
- wait_time is the time after we recv the first byte and before the task is created
- ready_time and total_time don't include wait_time
- Feature: add tls handshake in escape log.
- Optimization: allow to set a list of tls certificate files.
- BUG FIX: fix reload of server if tls / acl config changed.
v0.4.14:
- Feature: support https forward on all escapers.
- Feature: add ProxyHttps escaper.
- Feature: support https proxy peer on ProxyFloat escaper.
- Optimization: add options to set internal copy buffer size.
- BUG FIX: fix domain prefix match in route-upstream escaper.
v0.4.13:
- Optimization: add more fields such as io stats to task log
- BUG FIX: fix handle of response to http HEAD request
v0.4.12:
- Feature: add log config in main conf, which sets initial config for loggers
- Feature: allow to send log to syslogd through unix and udp sockets
- Optimization: move tcp_connect and udp_relay log to a new escape logger
v0.4.11:
- Feature: enable request recv timeout check on http proxy server
- Optimization: use separate resolve logger for each resolver
- Optimization: limit client address at socket level for udp client sockets
- Optimization: use more thread local buffer
v0.4.10:
- Feature: enable keepalive by default on dynamic escapers
- Feature: enable task idle check on servers
- BUG FIX: do strict check on limit read
v0.4.9:
- Feature: add instance count config field to server listen config
- Feature: add 0x09 as connection timed out socks5 reply code, as it's added in socks6 draft
- Feature: reflect peer timeout in response to client for proxy escapers
- use 504 for http server response
- use 0x09 for socks5 reply
- Feature: support ingress_network_filter for servers
- Feature: support egress_network_filter in direct fixed escaper
- Feature: add response header X-BD-Dynamic-Egress-Info for dynamic escapers, it will be set
if server_id in config is set.
- Feature: let socks5 dynamic peer return early if expired when sending request on an alive connection
- Optimization: use different task log threads for each server
- Optimization: increase the default backlog value to 4096
- Optimization: always use socket address in listen config, drop separate port config
- BUG FIX: use real expire time in http response
- BUG FIX: make sure close the remote connection if http forward task should close
v0.4.8:
- BUG FIX: fix format of http response header Proxy-Authenticate
v0.4.7:
- Optimization: use askama instead of handlebars to generate error html page
- Optimization: support systemd version 23x and python version 3.5.x
- Optimization: switch expire_guard_seconds option to expire_guard_duration for proxy_float escaper
- Optimization: rename main conf key for auth to 'user_group'
v0.4.6:
- BUG FIX: fix http CONNECT 200 response when any custom header enabled
v0.4.5:
- Optimization: do not count in target port in rendezvous selection for proxy escapers.
- Optimization: adjust custom headers and settings for http_proxy server:
- add header X-BD-Remote-Connection-Info, which will be set if server_id in config is set.
- remove header X-BD-Remote-Connection-Expire, as it is contained in X-BD-Remote-Connection-Info.
- remove option http_forward_upstream_id, add option http_forward_mark_upstream instead,
which requires server_id to be set. The value for header X-BD-Upstream-Id will be server_id.
- Optimization: change some fields in tcp connect logs:
- add "next-bind-ip" to record the bind ip we selected before the connection.
- rename "tcp-expire" to "next-expire", this is the peer expire time, not only the connection.
- rename "next-bind" to "next-bound-addr", this is the local addr from which we connect to remote.
- rename "next-peer" to "next-peer-addr", which is the remote socket address.
- Optimization: use parking_lot::Mutex for short non-async operations.
- BUG FIX: fix peer update for proxy_float escaper.
- BUG FIX: use only ICANN domains in psl data file.
v0.4.4:
- Feature: support non-blocking redis-cluster dynamic peer update
- Feature: introduce selective vector and use it in proxy escapers
The nodes can be weighted, and we support random/sequence/rendezvous pick policies
- Feature: support redis 6 AUTH with username
- Feature: add user stats, including connection/request/traffic stats
- Optimization: use ahash instead of std hash for better performance
v0.4.3:
- BUG FIX: resolver: fix empty records with Ipv4First policy if ipv6 resolver return empty first
v0.4.2:
- Feature: allow to set request limit at user level
v0.4.1:
- Feature: add user group reload action in daemon helper script
- Feature: allow to set rate limit at user level at the server side
- Feature: respect the expire value in proxy_float escaper; the following options are added:
- expire_guard_seconds
This will set some buffer time between the time we make the selection and
the time we make the real connection
- Feature: allow http dynamic peer to append extra headers via "extra_append_headers"
- BUG FIX: fix handling of multiple http headers
v0.4.0:
- Feature: add proxy_float escaper
- Feature: add proxy_socks5 escaper
- Feature: add some custom response headers for http_proxy server
- X-BD-Upstream-Id
For http forward protocol. If this header is present, it means the response
comes from the remote side, or at least from the remote proxy that has been
configured with the same 'upstream id' value.
- X-BD-Remote-Connection-Expire
May be present in all http responses. If the value is a valid rfc3339 datetime
string, the remote connection will expire after this time, and the pending data
may fail to transfer. New requests should not be affected if the connection
to the proxy is keep-alive and clean. If there are multiple chained proxies on the
path, the nearest value from now will be kept.
- X-BD-Upstream-Addr
If enabled, it contains the upstream addr we attempted to connect to. If there are
multiple chained proxies on the path, the result from the nearest one to upstream
will be used. Note that not all proxies support such info. It depends on the real
topology whether its value is meaningful.
- X-BD-Outgoing-IP
If enabled, it will contain the farthest ip address we used to connect to upstream.
If there are multiple chained proxies on the path, the result from the nearest one
to upstream will be used. Note that not all proxies support such info and the ip address
may still be behind NAT. It depends on the real topology whether its value
is meaningful.
- Feature: allow to enable tls for http_proxy server
- BUG FIX: fix encoding of username and password when used in HTTP contexts,
now we can support all UTF-8 chars in username and password.
- BUG FIX: fix the meaning of various stats
- server stats: count in all data in proxy protocol layer to client, including negotiation
- escaper stats: count in all data in proxy protocol layer to upstream, including negotiation
- task stats: only count in real user data both to client and to upstream, excluding negotiation
- tls is considered a layer between transport and application, which won't be counted in
v0.3.5:
- BUG FIX: fix install of systemd unit file in deb package
v0.3.4:
- BUG FIX: fix building of deb package
v0.3.3
- Feature: allow to set multiple proxy addresses in proxy_http escaper
- Feature: use the official way to build deb packages
v0.3.2
- Feature: add json-rpc protocol to local controller
- Feature: add g3proxy-daemon-helper script for reload and offline actions
- Feature: add more tcp and http related config options
- BUG FIX: fix deadlock when reloading route type escapers
v0.3.1
- Feature: add basic auth to proxy_http escaper
- Feature: add local_match and radix_match rules to route_upstream escaper
- BUG FIX: make router in proxy_http escaper really optional
v0.3.0
- Feature: add sphinx doc for all configurations
- Feature: add error response body for http_proxy server
- Feature: add some 'route' type escapers
The 'route' escapers are used to select next escapers,
so now escapers can depend on others, but cycles are not allowed in the final dependency graph.
The following 'route' escapers are added:
- route_random
- route_upstream
- route_dst_ip
- Feature: add script to generate release tarball
- Tweak: rename not_existed escaper to dummy_deny
- Tweak: log optimization
v0.2.2
- Feature: make systemd service restart graceful, though not perfect
- Feature: add proxy_http escaper
v0.2.1
- Optimization: use a buffered writer when sending response to client
- BUG FIX: close connection if remote response is read to end
v0.2.0
- Initial release with a CHANGELOG.

103
g3proxy/Cargo.toml Normal file
View file

@ -0,0 +1,103 @@
[package]
name = "g3proxy"
version = "1.7.10"
edition = "2021"
rust-version = "1.66"
description = "G3 generic proxy"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
anyhow = "1.0"
thiserror = "1.0"
async-trait = "0.1"
async-recursion = "1.0"
clap = "4.0"
clap_complete = "4.0"
yaml-rust = "0.4"
once_cell = "1.7"
futures-util = "0.3"
nix = { version = "0.26", default-features = false }
rand = "0.8"
tokio = { version = "1.24", features = ["rt-multi-thread", "rt", "signal", "sync", "time", "io-util", "net", "fs"] }
tokio-util = { version = "0.7", features = ["time"] }
tokio-rustls = "0.23.1"
rustls = "0.20"
tokio-openssl = "0.6"
openssl = "0.10"
indexmap = "1.6"
bytes = "1.0"
chrono = { version = "0.4.22", default-features = false, features = ["clock"] }
uuid = { version = "1.2", features = ["v1", "v4"] }
log = { version = "0.4", features = ["max_level_trace", "release_max_level_info"] }
slog = { version = "2", features = ["nested-values", "max_level_trace", "release_max_level_info"] }
percent-encoding = "2.1"
url = "2.1"
http = "0.2.9"
h2 = "0.3.15"
mime = "0.3"
askama = "0.12"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
ip_network = "0.4"
ip_network_table = "0.2"
radix_trie = "0.2.0"
base64 = "0.21"
pin-project = "1.0"
memchr = "2.4"
arc-swap = "1.2"
capnp-rpc = "0.16"
capnp = "0.16"
itoa = "1.0"
redis = { version = "0.22", default-features = false, features = ["cluster"] }
ascii = "1.0"
ahash = "0.8"
fxhash = "0.2"
governor = { version = "0.5", default-features = false, features = ["std", "jitter"] }
cadence = { package = "cadence-with-flush", version = "0.29" }
rmpv = "1.0"
mlua = { version = "0.8.1", features = ["send"], optional = true }
pyo3 = { version = "0.18", features = ["auto-initialize"], optional = true }
curl = { version = "0.4", features = ["http2"], optional = true }
g3-compat = { path = "../lib/g3-compat" }
g3-types = { path = "../lib/g3-types", features = ["auth-crypt", "rustls", "openssl", "acl-rule", "http", "route", "async-log"] }
g3-socket = { path = "../lib/g3-socket" }
g3-daemon = { path = "../lib/g3-daemon" }
g3-signal = { path = "../lib/g3-signal" }
g3-datetime = { path = "../lib/g3-datetime" }
g3-syslog = { path = "../lib/g3-syslog" }
g3-journal = { path = "../lib/g3-journal" }
g3-fluentd = { path = "../lib/g3-fluentd" }
g3-statsd = { path = "../lib/g3-statsd" }
g3-yaml = { path = "../lib/g3-yaml", features = ["resolve", "rustls", "openssl", "acl-rule", "http", "ftp-client", "proxy", "route", "dpi", "icap"] }
g3-json = { path = "../lib/g3-json", features = ["acl-rule", "resolve", "http", "rustls", "openssl", "proxy"] }
g3-msgpack = { path = "../lib/g3-msgpack" }
g3-io-ext = { path = "../lib/g3-io-ext" }
g3-resolver = { path = "../lib/g3-resolver", features = ["trust-dns"] }
g3-xcrypt = { path = "../lib/g3-xcrypt" }
g3-ftp-client = { path = "../lib/g3-ftp-client" }
g3-http = { path = "../lib/g3-http" }
g3-h2 = { path = "../lib/g3-h2" }
g3-socks = { path = "../lib/g3-socks" }
g3-dpi = { path = "../lib/g3-dpi" }
g3-tls-cert = { path = "../lib/g3-tls-cert" }
g3-icap-client = { path = "../lib/g3-icap-client" }
g3proxy-proto = { path = "proto" }
[dev-dependencies]
tokio = { version = "1.0", features = ["macros", "io-util"] }
tokio-util = { version = "0.7", features = ["io"] }
[build-dependencies]
rustc_version = "0.4"
[features]
default = ["lua54", "python", "c-ares", "curl"]
lua = ["mlua"]
luajit = ["lua", "mlua/luajit"]
lua51 = ["lua", "mlua/lua51"]
lua53 = ["lua", "mlua/lua53"]
lua54 = ["lua", "mlua/lua54"]
python = ["pyo3"]
c-ares = ["g3-resolver/c-ares"]
curl = ["dep:curl"]
vendored-openssl = ["openssl/vendored"]

195
g3proxy/README.md Normal file
View file

@ -0,0 +1,195 @@
# g3proxy
g3proxy is an enterprise-level forward proxy, with basic support for
tcp streaming / tls streaming / transparent proxy / reverse proxy as well.
## Features
### Server
#### General
* Ingress network filter / Target Host filter / Target Port filter
* Socket Speed Limit / Request Rate Limit / IDLE Check
* Protocol Inspection / TLS Interception / ICAP Adaptation
* Various TCP / UDP socket config options
#### Forward Proxy
- Http(s) Proxy
* TLS / mTLS
* Http Forward / Https Forward / Http CONNECT / Ftp over HTTP
* Basic User Authentication
* Port Hiding
- Socks Proxy
* Socks4 Tcp Connect / Socks5 Tcp Connect / Socks5 UDP Associate
* User Authentication
* Client side UDP IP Binding / IP Map / Ranged Port
#### Transparent Proxy
- SNI Proxy
* Multiple Protocol: TLS SNI extension / HTTP Host Header
* Host Redirection / Host ACL
#### Reverse Proxy
- Http(s) Reverse Proxy
* TLS / mTLS
* Basic User Authentication
* Port Hiding
* Host based Routing
* Path based Routing
#### Streaming
- TCP Stream
* Upstream TLS / mTLS
* Load Balance: RR / Random / Rendezvous / Jump Hash
- TLS Stream
* mTLS
* Upstream TLS / mTLS
* Load Balance: RR / Random / Rendezvous / Jump Hash
#### Alias Port
- TCP Port
- TLS Port
* mTLS
- Intelli Proxy
* Multiple protocol: Http Proxy / Socks Proxy
### Escaper
#### General
* Happy Eyeballs
* Socket Speed Limit
* Various TCP / UDP socket config options
* IP Bind
#### Direct Connect
- Fixed
* TCP Connect / TLS Connect / HTTP(s) Forward / UDP Associate
* Egress network filter
* Resolve redirection
- Float
* TCP Connect / TLS Connect / HTTP(s) Forward
* Egress network filter
* Resolve redirection
* Dynamic IP Bind
#### Proxy Chaining
- Http Proxy
* TCP Connect / TLS Connect / HTTP(s) Forward
* PROXY Protocol
* Load Balance: RR / Random / Rendezvous / Jump Hash
* Basic User Authentication
- Https Proxy
* TCP Connect / TLS Connect / HTTP(s) Forward
* PROXY Protocol
* Load Balance: RR / Random / Rendezvous / Jump Hash
* Basic User Authentication
* mTLS
- Socks5 Proxy
* TCP Connect / TLS Connect / HTTP(s) Forward / UDP Associate
* Load Balance: RR / Random / Rendezvous / Jump Hash
* Basic User Authentication
- Float
* Dynamic Proxy: Http Proxy / Https Proxy / Socks5 Proxy (no UDP)
#### Router
- route-client - based on client addresses
* exact ip match
* subnet match
- route-mapping - based on user supplied rules in requests
- route-query - based on queries to external agent
- route-resolved - based on resolved IP of target host
- route-select - simple load balancer
* RR / Random / Rendezvous / Jump Hash
- route-upstream - based on original target host
* exact ip match
* exact domain match
* wildcard domain match
* subnet match
* regex domain match
### Resolver
- c-ares
* UDP
* TCP
- trust-dns
* UDP / TCP
* DNS over TLS
* DNS over HTTPS
- fail-over
### Auth
#### User Authentication and Authorization
- ACL: Proxy Request / Target Host / Target Port / User Agent
- Socket Speed Limit / Request Rate Limit / Request Alive Limit / IDLE Check
- Auto Expire / Block
- Explicit Site Config
* match by exact ip / exact domain / wildcard domain / subnet
### Audit
- TCP Protocol Inspection
- TLS Interception
- Http / H2 Interception / ICAP Adaptation / Sampling
### Logging
- Log Types
* Server: task log
* Escaper: escape error log
* Resolver: resolve error log
* Audit: inspect / intercept log
- Backend: journald / syslog / fluentd
### Metrics
- Metrics Types
* Server level metrics
* Escaper level metrics
* User level metrics
* User-Site level metrics
- Backend: statsd, so we can support multiple backends via statsd implementations
## Documents
The detailed docs reside in the *doc* directory.
You need to [install sphinx](https://www.sphinx-doc.org/en/master/usage/installation.html) to build html docs.
## Examples
See [examples](examples/README.md).

75
g3proxy/build.rs Normal file
View file

@ -0,0 +1,75 @@
/*
* Copyright 2023 ByteDance and/or its affiliates.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
use std::env;
fn main() {
let rustc = rustc_version::version_meta().unwrap();
println!(
"cargo:rustc-env=G3_BUILD_RUSTC_VERSION={}",
rustc.short_version_string
);
println!("cargo:rustc-env=G3_BUILD_RUSTC_CHANNEL={:?}", rustc.channel);
println!(
"cargo:rustc-env=G3_BUILD_HOST={}",
env::var("HOST").unwrap()
);
println!(
"cargo:rustc-env=G3_BUILD_TARGET={}",
env::var("TARGET").unwrap()
);
println!(
"cargo:rustc-env=G3_BUILD_PROFILE={}",
env::var("PROFILE").unwrap()
);
println!(
"cargo:rustc-env=G3_BUILD_OPT_LEVEL={}",
env::var("OPT_LEVEL").unwrap()
);
println!(
"cargo:rustc-env=G3_BUILD_DEBUG={}",
env::var("DEBUG").unwrap()
);
if let Ok(v) = env::var("G3_PACKAGE_VERSION") {
println!("cargo:rustc-env=G3_PACKAGE_VERSION={v}");
}
if env::var("CARGO_FEATURE_LUA").is_ok() {
if env::var("CARGO_FEATURE_LUA51").is_ok() {
println!("cargo:rustc-env=G3_LUA_FEATURE=lua51");
} else if env::var("CARGO_FEATURE_LUA53").is_ok() {
println!("cargo:rustc-env=G3_LUA_FEATURE=lua53");
} else if env::var("CARGO_FEATURE_LUA54").is_ok() {
println!("cargo:rustc-env=G3_LUA_FEATURE=lua54");
} else if env::var("CARGO_FEATURE_LUAJIT").is_ok() {
println!("cargo:rustc-env=G3_LUA_FEATURE=luajit");
}
}
if env::var("CARGO_FEATURE_PYTHON").is_ok() {
println!("cargo:rustc-env=G3_PYTHON_FEATURE=python");
}
if env::var("CARGO_FEATURE_C_ARES").is_ok() {
println!("cargo:rustc-env=G3_C_ARES_FEATURE=c-ares");
}
if env::var("CARGO_FEATURE_CURL").is_ok() {
println!("cargo:rustc-env=G3_CURL_FEATURE=curl");
}
}
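
Each `cargo:rustc-env` line above becomes a compile-time environment variable for the crate being built. A hedged sketch of the consuming side (editorial; g3proxy's actual version-printing code is not shown in this excerpt):

```rust
// Editorial sketch (assumption): reading back the env vars set by build.rs.
// env!() fails the build if the variable is missing; option_env!() yields
// None instead, matching the optional G3_PACKAGE_VERSION above.
const BUILD_RUSTC: &str = env!("G3_BUILD_RUSTC_VERSION");
const PKG_VERSION: Option<&str> = option_env!("G3_PACKAGE_VERSION");

fn print_build_info() {
    println!("rustc: {BUILD_RUSTC}");
    if let Some(v) = PKG_VERSION {
        println!("package version: {v}");
    }
}
```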

View file

@ -0,0 +1,66 @@
package com.example.httpbin;
import java.io.File;
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
public class AuthNoCachePostFile {
static String proxyHost = "127.0.0.1";
static int proxyPort = 13128; // proxy port
static String proxyUser = "root";
static String proxyPassword = "toor";
public static void main(String[] args) throws Exception {
if (args.length != 1) {
System.out.println("File path not given");
System.exit(1);
}
HttpHost proxy = new HttpHost(proxyHost, proxyPort);
CredentialsProvider credsProvider = new BasicCredentialsProvider();
// set auth for proxy
credsProvider.setCredentials(
new AuthScope(proxyHost, proxyPort),
new UsernamePasswordCredentials(proxyUser, proxyPassword));
// set client level cred and proxy
CloseableHttpClient httpclient = HttpClients.custom()
.setDefaultCredentialsProvider(credsProvider)
.setProxy(proxy)
.build();
try {
HttpPost httppost = new HttpPost("http://httpbin.org/post");
File file = new File(args[0]);
FileEntity reqEntity = new FileEntity(file, ContentType.APPLICATION_OCTET_STREAM);
reqEntity.setChunked(true);
httppost.setEntity(reqEntity);
System.out.println("Executing request: " + httppost.getRequestLine());
// do not set any execution context with auth cache
CloseableHttpResponse response = httpclient.execute(httppost);
try {
System.out.println("----------------------------------------");
System.out.println(response.getStatusLine());
System.out.println(EntityUtils.toString(response.getEntity()));
} finally {
response.close();
}
} finally {
httpclient.close();
}
}
}

View file

@ -0,0 +1,21 @@
Java Apache HttpComponents Testcases
----
This directory contains the testcases written in Java, using
[Apache HttpComponents HttpClient 4.5](https://hc.apache.org/httpcomponents-client-4.5.x/index.html)
as the http client library.
### How to run
```shell
java -cp /usr/share/java/httpclient.jar:<filename>.jar com.example.httpbin.<classname>
```
### Testcases
#### AuthNoCachePostFile
Read a file and POST its content to `http://httpbin.org/post`.
**PreemptiveBasicAuthentication** is not enabled, so we can use this testcase to
test the untrusted read functionality of the http proxy server.

View file

@ -0,0 +1,49 @@
package com.example.httpbin;
import java.io.File;
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;
public class AuthPostFile {
static String proxyHost = "127.0.0.1";
static int proxyPort = 13128; // proxy port
static String proxyUser = "root";
static String proxyPassword = "toor";
static MediaType MEDIA_TYPE_OCTET_STREAM
= MediaType.get("application/octet-stream");
public static void main(String[] args) throws Exception {
if (args.length != 1) {
System.out.println("File path not given");
System.exit(1);
}
SimpleProxySelector proxyAddr = new SimpleProxySelector();
proxyAddr.SetProxy(proxyHost, proxyPort);
SimpleAuthenticator proxyAuth = new SimpleAuthenticator();
proxyAuth.SetAuth(proxyUser, proxyPassword);
OkHttpClient client = new OkHttpClient.Builder()
.proxySelector(proxyAddr)
.proxyAuthenticator(proxyAuth)
.build();
File file = new File(args[0]);
Request request = new Request.Builder()
.url("http://httpbin.org/post")
.post(RequestBody.create(MEDIA_TYPE_OCTET_STREAM, file))
.build();
try (Response response = client.newCall(request).execute()) {
System.out.println("----------------------------------------");
System.out.println("Status Code: " + response.code());
System.out.println(response.body().string());
}
}
}

View file

@ -0,0 +1,27 @@
Java OkHttp Testcases
----
This directory contains the testcases written in Java, using
[OkHttp 3.x](https://square.github.io/okhttp/)
as the http client library.
### How to run
```shell
# compile
javac -cp /usr/share/java/okhttp.jar -d ./build *java
# compress to jar, so it can be copied anywhere
cd build
jar cvf httpbin.jar com
# run
java -cp /usr/share/java/okhttp.jar:httpbin.jar com.example.httpbin.<classname> <params>
```
### Testcases
#### AuthPostFile
Read a file and POST its content to `http://httpbin.org/post`.
**PreemptiveBasicAuthentication** is not enabled, so we can use this testcase to
test the untrusted read functionality of the http proxy server.

View file

@ -0,0 +1,44 @@
package com.example.httpbin;
import java.io.IOException;
import okhttp3.Authenticator;
import okhttp3.Credentials;
import okhttp3.Challenge;
import okhttp3.Route;
import okhttp3.Response;
import okhttp3.Request;
class SimpleAuthenticator implements Authenticator {
String proxyAuth;
public void SetAuth(String username, String password) {
proxyAuth = Credentials.basic(username, password);
}
public Request authenticate(Route route, Response response) throws IOException {
if (response.request().header("Proxy-Authorization") != null) {
return null; // Give up, we've already failed to authenticate.
}
        // the username and password could be selected here based on the route and challenges
for (Challenge challenge : response.challenges()) {
// If this is preemptive auth, use a preemptive credential.
if (challenge.scheme().equalsIgnoreCase("OkHttp-Preemptive")) {
// only for CONNECT, before sending request
return response.request().newBuilder()
.header("Proxy-Authorization", proxyAuth)
.build();
} else if (challenge.scheme().equalsIgnoreCase("Basic")) {
// after recv 407 for the first non-auth request
                // there is no way to use preemptive auth for http forward, at least as of version 3.13
// users may add the Proxy-Authorization header to their requests directly
return response.request().newBuilder()
.header("Proxy-Authorization", proxyAuth)
.build();
}
}
return null; // no supported auth scheme
}
}

View file

@ -0,0 +1,36 @@
package com.example.httpbin;
import java.util.List;
import java.util.ArrayList;
import java.io.IOException;
import java.net.URI;
import java.net.Proxy;
import java.net.ProxySelector;
import java.net.SocketAddress;
import java.net.InetSocketAddress;
class SimpleProxySelector extends ProxySelector {
private Proxy proxy;
public SimpleProxySelector() {
super();
}
public void SetProxy(String host, int port) {
InetSocketAddress sa = new InetSocketAddress(host, port);
proxy = new Proxy(Proxy.Type.HTTP, sa);
}
public final List<Proxy> select(URI uri) {
// users may add code to select proxy based on uri here
System.out.println("select proxy");
List<Proxy> proxyList = new ArrayList<>();
proxyList.add(proxy);
return proxyList;
}
public final void connectFailed(URI uri, SocketAddress sa, IOException ioe) {
System.out.println("connect failed");
// users may handle error here
}
}

View file

@ -0,0 +1,72 @@
#!/usr/bin/env python3
import argparse
import sys
import unittest
import requests
from requests.auth import HTTPBasicAuth
target_proxy = ''
target_site = 'http://httpbin.org'
server_ca_cert = None
class TestHttpBin(unittest.TestCase):
def setUp(self):
self.session = requests.Session()
self.session.proxies.update({'http': target_proxy, 'https': target_proxy})
self.session.headers.update({'accept': 'application/json'})
self.session.verify = server_ca_cert
def tearDown(self):
self.session.close()
def test_simple_get(self):
r = self.session.get(f"{target_site}/get")
self.assertEqual(r.status_code, 200)
def test_basic_auth_get(self):
r = self.session.get(f"{target_site}/basic-auth/name/pass")
self.assertEqual(r.status_code, 401)
r = self.session.get(f"{target_site}/basic-auth/name/pass", auth=HTTPBasicAuth('name', 'pass'))
self.assertEqual(r.status_code, 200)
r = self.session.get(f"{target_site}/basic-auth/name/pass", auth=HTTPBasicAuth('name', 'pas'))
self.assertEqual(r.status_code, 401)
def test_base64_decode(self):
self.session.headers.update({'accept': 'text/html'})
r = self.session.get(f"{target_site}/base64/SFRUUEJJTiBpcyBhd2Vzb21l")
self.assertEqual(r.status_code, 200)
self.assertEqual(r.text, "HTTPBIN is awesome")
def test_post_continue(self):
data = "Content to post"
r = self.session.post(f"{target_site}/post", data=data)
self.assertEqual(r.status_code, 200)
r = self.session.post(f"{target_site}/post", data=data, headers={"Expect": "100-continue"})
self.assertEqual(r.status_code, 200)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--proxy', '-x', nargs='?', help='Proxy URL')
parser.add_argument('--site', '-T', nargs='?', help='Target Site', default=target_site)
parser.add_argument('--ca-cert', nargs='?', help='CA Cert')
(args, left_args) = parser.parse_known_args()
if args.proxy is not None:
target_proxy = args.proxy
if args.ca_cert is not None:
server_ca_cert = args.ca_cert
target_site = args.site
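    # keep the unparsed args for unittest; argv[0] must be present as the program name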
left_args.insert(0, sys.argv[0])
unittest.main(argv=left_args)

View file

@ -0,0 +1,50 @@
#include "vars.pg"
Content SimpleContent = {
size = exp(13KB); // response sizes distributed exponentially
cachable = 0%; // disable check of cache
};
// a primitive server cleverly labeled "S101"
// normally, you would specify more properties,
// but we will mostly rely on defaults for now
Server S = {
kind = "S101";
addresses = server_address; // where to create these server agents
contents = [ SimpleContent ];
direct_access = contents;
req_body_allowed = 70%; // affects "Expect: 100-continue" requests
};
// a primitive robot
Robot R = {
kind = "R101";
origins = S.addresses; // where the origin servers are
addresses = client_address; // where these robot agents will be created
http_proxies = proxy_address;
req_methods = ["POST", "PUT": 10%];
req_body_pause_prob = 50%; // add "Expect: 100-continue" header
post_contents = [ SimpleContent ];
put_contents = post_contents;
open_conn_lmt = 4;
};
Phase P1 = {
name = "expect_post";
goal.xactions = 100;
goal.errors = 1;
};
schedule(P1);
// commit to using these servers and robots
use(S, R);

View file

@ -0,0 +1,44 @@
---
runtime:
thread_number: 2
controller:
local:
recv_timeout: 30
send_timeout: 1
server:
- name: server_direct
escaper: direct
type: http_proxy
listen:
address: "[::]:10087"
conn_limit: 100M
- name: server_http
escaper: http
type: http_proxy
listen:
address: "[::]:10086"
resolver:
- name: default
type: DenyAll
escaper:
- name: direct
type: direct_fixed
no_ipv6: true
resolver: default
resolve_strategy: IPv4Only
tcp_conn_limit: 80M
udp_relay_limit: 10M
egress_network_filter:
default: allow
allow: 127.0.0.1
- name: http
type: proxy_http
proxy_addr: 127.0.0.1:10087
no_ipv6: true
resolver: default
resolve_strategy: IPv4Only
tcp_conn_limit: 80M

View file

@ -0,0 +1,61 @@
#include "vars.pg"
Content cntImage = {
kind = "Image";
mime = { type = "image/jpg"; extensions = [ ".jpg" ]; };
size = exp(100KB);
cachable = 0%;
};
Content cntHTML = {
kind = "HTML";
mime = { type = "text/html"; extensions = [ ".html" : 60%, ".htm" ]; };
size = exp(8.5KB);
cachable = 0%;
may_contain = [ cntImage ];
embedded_obj_cnt = zipf(13);
};
// a primitive server cleverly labeled "S101"
// normally, you would specify more properties,
// but we will mostly rely on defaults for now
Server S = {
kind = "S101";
addresses = server_address; // where to create these server agents
contents = [ cntHTML, cntImage ];
direct_access = contents;
pconn_use_lmt = const(10);
};
// a primitive robot
Robot R = {
kind = "R101";
origins = S.addresses; // where the origin servers are
addresses = client_address; // where these robot agents will be created
http_proxies = proxy_address;
req_methods = ["GET", "HEAD": 10%];
embed_recur = 100%;
pconn_use_lmt = const(10);
open_conn_lmt = 4;
};
Phase P1 = {
name = "keepalive_get";
goal.xactions = 100;
goal.errors = 1;
};
schedule(P1);
// commit to using these servers and robots
use(S, R);

View file

@ -0,0 +1,45 @@
#include "vars.pg"
Content SimpleContent = {
size = exp(13KB); // response sizes distributed exponentially
cachable = 0%; // disable check of cache
};
// a primitive server cleverly labeled "S101"
// normally, you would specify more properties,
// but we will mostly rely on defaults for now
Server S = {
kind = "S101";
addresses = server_address; // where to create these server agents
contents = [ SimpleContent ];
direct_access = contents;
};
// a primitive robot
Robot R = {
kind = "R101";
origins = S.addresses; // where the origin servers are
addresses = client_address; // where these robot agents will be created
http_proxies = proxy_address;
req_methods = ["GET", "HEAD": 10%];
open_conn_lmt = 4;
};
Phase P1 = {
name = "simple_get";
goal.xactions = 100;
goal.errors = 1;
};
schedule(P1);
// commit to using these servers and robots
use(S, R);

View file

@ -0,0 +1,47 @@
#include "vars.pg"
Content SimpleContent = {
size = exp(13KB); // response sizes distributed exponentially
cachable = 0%; // disable check of cache
};
// a primitive server cleverly labeled "S101"
// normally, you would specify more properties,
// but we will mostly rely on defaults for now
Server S = {
kind = "S101";
addresses = server_address; // where to create these server agents
contents = [ SimpleContent ];
direct_access = contents;
};
// a primitive robot
Robot R = {
kind = "R101";
origins = S.addresses; // where the origin servers are
addresses = client_address; // where these robot agents will be created
http_proxies = proxy_address;
req_methods = ["POST", "PUT": 10%];
post_contents = [ SimpleContent ];
put_contents = post_contents;
open_conn_lmt = 4;
};
Phase P1 = {
name = "simple_post";
goal.xactions = 100;
goal.errors = 1;
};
schedule(P1);
// commit to using these servers and robots
use(S, R);

View file

@ -0,0 +1,4 @@
addr[] server_address = ['127.0.0.1:9090'];
addr[] client_address = ['127.0.0.1'];
addr[] proxy_address = ['127.0.0.1:10086'];

5
g3proxy/debian/changelog Normal file
View file

@ -0,0 +1,5 @@
g3proxy (1.7.10-1) UNRELEASED; urgency=medium
* New upstream release.
-- G3proxy Maintainers <g3proxy-maintainers@devel.machine> Thu, 09 Mar 2023 17:43:13 +0800

1
g3proxy/debian/compat Normal file
View file

@ -0,0 +1 @@
10

13
g3proxy/debian/control Normal file
View file

@ -0,0 +1,13 @@
Source: g3proxy
Section: net
Priority: optional
Maintainer: G3proxy Maintainers <g3proxy-maintainers@devel.machine>
Build-Depends: debhelper, pkg-config, libtool, capnproto, python3-sphinx, graphviz,
libssl-dev, libc-ares-dev, liblua5.4-dev | liblua5.3-dev | liblua5.1-dev
Standards-Version: 3.9.8
Package: g3proxy
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, systemd
Recommends: ca-certificates
Description: Generic proxy for G3 Project

View file

@ -0,0 +1 @@
g3proxy/doc/_build/html

View file

@ -0,0 +1,5 @@
usr/bin/g3proxy
usr/bin/g3proxy-ctl
usr/bin/g3proxy-ftp
usr/bin/g3proxy-lua
lib/systemd/system/

37
g3proxy/debian/rules Executable file
View file

@ -0,0 +1,37 @@
#!/usr/bin/make -f
PACKAGE_NAME := g3proxy
BUILD_PROFILE := release-lto
DEB_VERSION ?= $(shell dpkg-parsechangelog -SVersion)
LUA_FEATURE ?= $(shell scripts/package/detect_lua_feature.sh)
SSL_FEATURE ?= $(shell scripts/package/detect_openssl_feature.sh)
%:
dh $@
override_dh_auto_clean:
cargo clean --frozen --offline
rm -rf $(PACKAGE_NAME)/doc/_build
override_dh_auto_build:
G3_PACKAGE_VERSION=$(DEB_VERSION) \
cargo build --frozen --offline --profile $(BUILD_PROFILE) \
--no-default-features --features $(LUA_FEATURE),$(SSL_FEATURE),c-ares \
--package g3proxy --package g3proxy-ctl --package g3proxy-ftp --package g3proxy-lua
sh $(PACKAGE_NAME)/service/generate_systemd.sh
cd $(PACKAGE_NAME)/doc && make html
override_dh_auto_install:
dh_auto_install
install -m 755 -D target/$(BUILD_PROFILE)/g3proxy debian/tmp/usr/bin/g3proxy
install -m 755 -D target/$(BUILD_PROFILE)/g3proxy-ctl debian/tmp/usr/bin/g3proxy-ctl
install -m 755 -D target/$(BUILD_PROFILE)/g3proxy-ftp debian/tmp/usr/bin/g3proxy-ftp
install -m 755 -D target/$(BUILD_PROFILE)/g3proxy-lua debian/tmp/usr/bin/g3proxy-lua
install -m 644 -D $(PACKAGE_NAME)/service/g3proxy@.service debian/tmp/lib/systemd/system/g3proxy@.service
mkdir -p debian/tmp/usr/share/doc/$(PACKAGE_NAME)/
cp -r $(PACKAGE_NAME)/doc/_build/html debian/tmp/usr/share/doc/$(PACKAGE_NAME)/
override_dh_installchangelogs:
dh_installchangelogs $(PACKAGE_NAME)/CHANGELOG

View file

@ -0,0 +1 @@
3.0 (quilt)

20
g3proxy/doc/Makefile Normal file
View file

@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

0
g3proxy/doc/_static/.place-holder vendored Normal file
View file

0
g3proxy/doc/_templates/.place-holder vendored Normal file
View file

70
g3proxy/doc/conf.py Normal file
View file

@ -0,0 +1,70 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- Project information -----------------------------------------------------
project = 'g3proxy'
copyright = '2022, Zhang Jingqiang'
author = 'Zhang Jingqiang'
# The full version, including alpha/beta/rc tags
release = '1.7.10'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.graphviz",
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
#html_theme_options = {
# 'stickysidebar': True,
#}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# -- Custom Options ----------------------------------------------------------
# Set the master document, which contains the root toctree directive.
# The default changed from 'contents' to 'index' from sphinx version 2.0,
# so we need to explicitly set it in order to be compatible with old versions.
master_doc = 'index'
# Use 'svg' format for graphviz
graphviz_output_format = 'svg'

View file

@ -0,0 +1,127 @@
.. _configuration_auditor:
*******
Auditor
*******
The type for each auditor config is *map*; the keys are as follows:
name
----
**required**, **type**: str
Set the auditor name, which can be referenced in :ref:`server config <conf_server_common_auditor>`.
protocol_inspection
-------------------
**optional**, **type**: :ref:`protocol inspection <conf_value_dpi_protocol_inspection>`
Set basic config for protocol inspection.
**default**: set with default value
server_tcp_portmap
------------------
**optional**, **type**: :ref:`server tcp portmap <conf_value_dpi_server_tcp_portmap>`
Set the portmap for protocol inspection based on server side tcp port.
**default**: set with default value
client_tcp_portmap
------------------
**optional**, **type**: :ref:`client tcp portmap <conf_value_dpi_client_tcp_portmap>`
Set the portmap for protocol inspection based on client side tcp port.
**default**: set with default value
tls_cert_generator
------------------
**optional**, **type**: :ref:`tls cert generator <conf_value_dpi_tls_cert_generator>`
Set certificate generator for TLS interception.
If not set, TLS interception will be disabled.
**default**: not set
tls_interception_client
-----------------------
**optional**, **type**: :ref:`tls interception client <conf_value_dpi_tls_interception_client>`
Set the tls client config for server handshake in TLS interception.
**default**: set with default value
log_uri_max_chars
-----------------
**optional**, **type**: usize
Set the max number of chars for the logged URI.
**default**: 1024
h1_interception
---------------
**optional**, **type**: :ref:`h1 interception <conf_value_dpi_h1_interception>`
Set http 1.x interception config.
**default**: set with default value
h2_interception
---------------
**optional**, **type**: :ref:`h2 interception <conf_value_dpi_h2_interception>`
Set http 2.0 interception config.
**default**: set with default value
icap_reqmod_service
-------------------
**optional**, **type**: :ref:`icap service config <conf_value_audit_icap_service_config>`
Set the ICAP REQMOD service config.
**default**: not set
.. versionadded:: 1.7.3
icap_respmod_service
--------------------
**optional**, **type**: :ref:`icap service config <conf_value_audit_icap_service_config>`
Set the ICAP RESPMOD service config.
**default**: not set
.. versionadded:: 1.7.3
.. _conf_auditor_application_audit_ratio:
application_audit_ratio
-----------------------
**optional**, **type**: :ref:`random ratio <conf_value_random_ratio>`
Set the application audit (like ICAP REQMOD/RESPMOD) ratio for incoming requests.
This also controls whether protocol inspection is really enabled for a specific request.
User side settings may override this.
**default**: 1.0
.. versionadded:: 1.7.4
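For illustration, a minimal auditor entry might look like the sketch below; the top-level *auditor* key, the name and the ratio value are assumptions for illustration only:

.. code-block:: yaml

  auditor:
    - name: default          # hypothetical auditor name
      log_uri_max_chars: 1024
      application_audit_ratio: 0.5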

View file

@ -0,0 +1,97 @@
.. _configuration_escaper_direct_fixed:
direct_fixed
============
This escaper will access the target upstream from the local machine directly.
The following interfaces are supported:
* tcp connect
* udp relay
* udp connect
* http(s) forward
* ftp over http
The following common keys are supported:
* :ref:`shared_logger <conf_escaper_common_shared_logger>`
* :ref:`resolver <conf_escaper_common_resolver>`, **required**
* :ref:`resolve_strategy <conf_escaper_common_resolve_strategy>`
The user custom resolve strategy will be taken into account.
* :ref:`tcp_sock_speed_limit <conf_escaper_common_tcp_sock_speed_limit>`
* :ref:`udp_sock_speed_limit <conf_escaper_common_udp_sock_speed_limit>`
* :ref:`no_ipv4 <conf_escaper_common_no_ipv4>`
* :ref:`no_ipv6 <conf_escaper_common_no_ipv6>`
* :ref:`tcp_connect <conf_escaper_common_tcp_connect>`
The user tcp connect params will be taken into account.
* :ref:`tcp_misc_opts <conf_escaper_common_tcp_misc_opts>`
* :ref:`udp_misc_opts <conf_escaper_common_udp_misc_opts>`
* :ref:`extra_metrics_tags <conf_escaper_common_extra_metrics_tags>`
bind_ip
-------
**optional**, **type**: :ref:`ip addr str <conf_value_ip_addr_str>` | seq
Set the bind ip address(es) for sockets.
For *seq* value, each of its element must be :ref:`ip addr str <conf_value_ip_addr_str>`.
Only random selection is supported. Use *route* type escapers if this doesn't meet your needs.
**default**: not set
egress_network_filter
---------------------
**optional**, **type**: :ref:`egress network acl rule <conf_value_egress_network_acl_rule>`
Set the network filter for the (resolved) remote ip address.
**default**: all permitted except for loopback and link-local addresses
happy_eyeballs
--------------
**optional**, **type**: :ref:`happy eyeballs <conf_value_happy_eyeballs>`
Set the HappyEyeballs config.
**default**: default HappyEyeballs config
.. versionadded:: 1.5.3
tcp_keepalive
-------------
**optional**, **type**: :ref:`tcp keepalive <conf_value_tcp_keepalive>`
Set tcp keepalive.
The tcp keepalive set in user config will be taken into account.
**default**: no keepalive set
resolve_redirection
-------------------
**optional**, **type**: :ref:`resolve redirection <conf_value_resolve_redirection>`
Set the dns redirection rules at escaper level.
**default**: not set
enable_path_selection
---------------------
**optional**, **type**: bool
Whether we should enable path selection.
.. note:: Path selection should be enabled on the server side, or this option will have no effect.
**default**: false
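A sketch of a direct_fixed escaper using the keys above (the escaper name and the addresses are hypothetical):

.. code-block:: yaml

  escaper:
    - name: direct
      type: direct_fixed
      resolver: default       # a resolver with this name must exist
      bind_ip:
        - 192.168.0.2         # hypothetical local addresses, randomly selected
        - 192.168.0.3
      enable_path_selection: true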

View file

@ -0,0 +1,156 @@
.. _configuration_escaper_direct_float:
************
direct_float
************
This escaper will access the target upstream from the local machine directly. The local bind ip, which is required,
can be set via the `publish` rpc method.
The following interfaces are supported:
* tcp connect
* http(s) forward
The Cap'n Proto RPC publish command is supported on this escaper; the published data should be a map, with the keys:
* ipv4
Set the IPv4 bind ip address(es).
The value could be an array of or just one :ref:`bind ip <config_escaper_dynamic_bind_ip>`.
* ipv6
Set the IPv6 bind ip address(es).
The value could be an array of or just one :ref:`bind ip <config_escaper_dynamic_bind_ip>`.
There is no path selection support for this escaper.
Config Keys
===========
The following common keys are supported:
* :ref:`shared_logger <conf_escaper_common_shared_logger>`
* :ref:`resolver <conf_escaper_common_resolver>`, **required**
* :ref:`resolve_strategy <conf_escaper_common_resolve_strategy>`
The user custom resolve strategy will be taken into account.
* :ref:`tcp_sock_speed_limit <conf_escaper_common_tcp_sock_speed_limit>`
* :ref:`udp_sock_speed_limit <conf_escaper_common_udp_sock_speed_limit>`
* :ref:`no_ipv4 <conf_escaper_common_no_ipv4>`
* :ref:`no_ipv6 <conf_escaper_common_no_ipv6>`
* :ref:`tcp_connect <conf_escaper_common_tcp_connect>`
The user tcp connect params will be taken into account.
* :ref:`tcp_misc_opts <conf_escaper_common_tcp_misc_opts>`
* :ref:`extra_metrics_tags <conf_escaper_common_extra_metrics_tags>`
cache_ipv4
----------
**recommend**, **type**: :ref:`file path <conf_value_file_path>`
Set the cache file for published IPv4 IP Address(es).
It is recommended to set this, as the initial fetch of peers at startup may not finish before the first batch of requests arrives.
The file will be created if it does not exist.
**default**: not set
cache_ipv6
----------
**recommend**, **type**: :ref:`file path <conf_value_file_path>`
Set the cache file for published IPv6 IP Address(es).
It is recommended to set this, as the initial fetch of peers at startup may not finish before the first batch of requests arrives.
The file will be created if it does not exist.
**default**: not set
egress_network_filter
---------------------
**optional**, **type**: :ref:`egress network acl rule <conf_value_egress_network_acl_rule>`
Set the network filter for the (resolved) remote ip address.
**default**: all permitted except for loopback and link-local addresses
happy_eyeballs
--------------
**optional**, **type**: :ref:`happy eyeballs <conf_value_happy_eyeballs>`
Set the HappyEyeballs config.
**default**: default HappyEyeballs config
.. versionadded:: 1.5.3
tcp_keepalive
-------------
**optional**, **type**: :ref:`tcp keepalive <conf_value_tcp_keepalive>`
Set tcp keepalive.
The tcp keepalive set in user config will be taken into account.
**default**: 60s
resolve_redirection
-------------------
**optional**, **type**: :ref:`resolve redirection <conf_value_resolve_redirection>`
Set the dns redirection rules at escaper level.
**default**: not set
.. _config_escaper_dynamic_bind_ip:
Bind IP
=======
We use a json string to represent a dynamic bind ip, with a map as the root element.
* ip
**required**, **type**: :ref:`ip addr str <conf_value_ip_addr_str>`
Set the IP address. The address family should match the type of the publish key described above.
* isp
**optional**, **type**: str
ISP for the egress ip address.
* eip
**optional**, **type**: :ref:`ip addr str <conf_value_ip_addr_str>`
The egress ip address from external view.
* area
**optional**, **type**: :ref:`egress area <conf_value_egress_area>`
Area of the egress ip address.
* expire
**optional**, **type**: :ref:`rfc3339 datetime str <conf_value_rfc3339_datetime_str>`
Set the expire time of this dynamic ip.
**default**: not set
If all optional fields can be left at their default values, the root element can be just an *ip* string.
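For illustration, a full record might look like this sketch (all values are hypothetical):

.. code-block:: json

  {
    "ip": "192.168.10.2",
    "isp": "example-isp",
    "eip": "203.0.113.2",
    "expire": "2024-01-01T00:00:00+08:00"
  }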

View file

@ -0,0 +1,16 @@
.. _configuration_escaper_dummy_deny:
**********
dummy_deny
**********
This is the dummy escaper designed to deny all requests.
There is no path selection support for this escaper.
Config Keys
===========
The following common keys are supported:
* :ref:`extra_metrics_tags <conf_escaper_common_extra_metrics_tags>`
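A complete entry is therefore tiny; for illustration (the escaper name is hypothetical):

.. code-block:: yaml

  escaper:
    - name: deny
      type: dummy_deny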

View file

@ -0,0 +1,208 @@
.. _configuration_escaper:
*******
Escaper
*******
The type for each escaper config is *map*, with two always required keys:
* *name*, which specify the name of the escaper.
* *type*, which specify the real type of the escaper, decides how to parse other keys.
There are many types of escaper, each with a section below.
Escapers
========
.. toctree::
:maxdepth: 2
dummy_deny
direct_fixed
direct_float
proxy_float
proxy_http
proxy_https
proxy_socks5
route_mapping
route_query
route_resolved
route_select
route_upstream
route_client
trick_float
Common Keys
===========
This section describes the common keys, they may be used by many escapers.
.. _conf_escaper_common_shared_logger:
shared_logger
-------------
**optional**, **type**: ascii
Set the escaper to use a logger running on a shared thread.
**default**: not set
.. _conf_escaper_common_resolver:
resolver
--------
**type**: str
Set the resolver to use for this escaper.
If the specified resolver doesn't exist in the configuration, a default DenyAll resolver will be used.
.. _conf_escaper_common_resolve_strategy:
resolve_strategy
-----------------
**optional**, **type**: :ref:`resolve strategy <conf_value_resolve_strategy>`
Set the resolve strategy.
.. _conf_escaper_common_tcp_sock_speed_limit:
tcp_sock_speed_limit
--------------------
**optional**, **type**: :ref:`tcp socket speed limit <conf_value_tcp_sock_speed_limit>`
Set speed limit for each tcp socket.
**default**: no limit, **alias**: tcp_conn_speed_limit | tcp_conn_limit
.. versionchanged:: 1.4.0 changed name to tcp_sock_speed_limit
.. _conf_escaper_common_udp_sock_speed_limit:
udp_sock_speed_limit
--------------------
**optional**, **type**: :ref:`udp socket speed limit <conf_value_udp_sock_speed_limit>`
Set speed limit for each udp socket.
**default**: no limit, **alias**: udp_relay_speed_limit | udp_relay_limit
.. versionchanged:: 1.4.0 changed name to udp_sock_speed_limit
.. _conf_escaper_common_no_ipv4:
no_ipv4
-------
**optional**, **type**: bool
Disable IPv4. This setting should be compatible with :ref:`resolve_strategy <conf_escaper_common_resolve_strategy>`.
**default**: false
.. _conf_escaper_common_no_ipv6:
no_ipv6
-------
**optional**, **type**: bool
Disable IPv6. This setting should be compatible with :ref:`resolve_strategy <conf_escaper_common_resolve_strategy>`.
**default**: false
.. _conf_escaper_common_tcp_connect:
tcp_connect
-----------
**optional**, **type**: :ref:`tcp connect <conf_value_tcp_connect>`
Set tcp connect params.
.. note:: For *direct* type escapers, the user level tcp connect params will be taken to limit the final value.
.. _conf_escaper_common_tcp_misc_opts:
tcp_misc_opts
-------------
**optional**, **type**: :ref:`tcp misc sock opts <conf_value_tcp_misc_sock_opts>`
Set misc tcp socket options.
**default**: not set, nodelay is default enabled
.. _conf_escaper_common_udp_misc_opts:
udp_misc_opts
-------------
**optional**, **type**: :ref:`udp misc sock opts <conf_value_udp_misc_sock_opts>`
Set misc udp socket options.
**default**: not set
.. _conf_escaper_common_default_next:
default_next
------------
**required**, **type**: str
Set the default next escaper for *route* type escapers.
.. _conf_escaper_common_pass_proxy_userid:
pass_proxy_userid
-----------------
**optional**, **type**: bool
Set if we should pass userid (username) to next proxy.
If set, the native basic auth method will be used when negotiating with the next proxy; the username field will be set
to the real username, and the password field to our package name (g3proxy if not forked).
**default**: false
.. note:: This will conflict with the real auth of next proxy.
.. _conf_escaper_common_use_proxy_protocol:
use_proxy_protocol
------------------
**optional**, **type**: :ref:`proxy protocol version <conf_value_proxy_protocol_version>`
Set the version of PROXY protocol we use for outgoing tcp connections.
**default**: not set, which means PROXY protocol won't be used
.. _conf_escaper_common_peer_negotiation_timeout:
peer_negotiation_timeout
------------------------
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Set the negotiation timeout for next proxy peers.
**default**: 10s
.. _conf_escaper_common_extra_metrics_tags:
extra_metrics_tags
------------------
**optional**, **type**: :ref:`static metrics tags <conf_value_static_metrics_tags>`
Set extra metrics tags that should be added to escaper stats, and to user stats that already have escaper tags added.
**default**: not set

View file

@ -0,0 +1,391 @@
.. _configuration_escaper_proxy_float:
***********
proxy_float
***********
This escaper provides the capability to access the target upstream through dynamic remote proxies.
The following interfaces are supported:
* tcp connect
* http(s) forward
The following remote proxy protocols are supported:
* Http Proxy
* Socks5 Proxy
The Cap'n Proto RPC publish command is supported on this escaper; the published data should be an array of peers,
or just one :ref:`peer <config_escaper_dynamic_peer>`.
There is no path selection support for this escaper.
Config Keys
===========
The following common keys are supported:
* :ref:`shared_logger <conf_escaper_common_shared_logger>`
* :ref:`tcp_sock_speed_limit <conf_escaper_common_tcp_sock_speed_limit>`
* :ref:`tcp_misc_opts <conf_escaper_common_tcp_misc_opts>`
* :ref:`peer negotiation timeout <conf_escaper_common_peer_negotiation_timeout>`
* :ref:`extra_metrics_tags <conf_escaper_common_extra_metrics_tags>`
source
------
**required**, **type**: :ref:`url str <conf_value_url_str>` | map | null
Set the fetch source for peers.
We support many types of sources. The type is detected by reading the *scheme* field of the url,
or the *type* key of the map. See :ref:`sources <config_escaper_dynamic_source>` for all supported types of sources.
cache
-----
**recommend**, **type**: :ref:`file path <conf_value_file_path>`
Set the cache file.
It is recommended to set this, as the initial fetch of peers at startup may not finish before the first batch of requests arrives.
The file will be created if it does not exist.
**default**: not set
refresh_interval
----------------
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Set the refresh interval to update peers from the configured source.
**default**: 1s
bind_ipv4
---------
**optional**, **type**: :ref:`ipv4 addr str <conf_value_ipv4_addr_str>`
Set the bind ip address for inet sockets.
**default**: not set
bind_ipv6
---------
**optional**, **type**: :ref:`ipv6 addr str <conf_value_ipv6_addr_str>`
Set the bind ip address for inet6 sockets.
**default**: not set
tls_client
----------
**optional**, **type**: bool | :ref:`openssl tls client config <conf_value_openssl_tls_client_config>`
Enable https peer, and set TLS parameters for this local TLS client.
If set to true or empty map, a default config is used.
**default**: not set
tcp_connect_timeout
-------------------
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Set the tcp connect application level timeout value.
**default**: 30s
tcp_keepalive
-------------
**optional**, **type**: :ref:`tcp keepalive <conf_value_tcp_keepalive>`
Set tcp keepalive.
The tcp keepalive set in user config won't be taken into account.
**default**: 60s
expire_guard_duration
---------------------
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
If the peer has an expire value, we won't connect to it if the expire time would be reached within this duration from now.
**default**: 5s
.. _config_escaper_dynamic_source:
Sources
=======
For *map* format, the **type** key should always be set.
passive
-------
Do not fetch peers. Only publish is needed.
The root value of source may be set to *null* to use passive source.
redis
-----
Fetch peers from a redis db.
The keys used in the *map* format are:
* addr
**required**, **type**: :ref:`upstream str <conf_value_upstream_str>`
Set the address of the redis instance. The default port is 6379 which can be omitted.
* db
**optional**, **type**: int
Set the database.
**default**: 0
* username
**optional**, **type**: str
Set the username for a redis 6 database if needed. It is required when connecting to an ACL-enabled redis 6 database.
**default**: not set
* password
**optional**, **type**: str
Set the password.
**default**: not set
* connect_timeout
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Set the connect timeout.
**default**: 5s
* read_timeout
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Set the timeout for redis read operation.
**default**: 2s
* sets_key
**required**, **type**: str
Set the key for the sets that store the peers. Each string record in the set is a single peer.
See :ref:`peers <config_escaper_dynamic_peer>` for its formats.
For *url* str values, the format is:
redis://[username][:<password>@]<addr>/<db>?sets_key=<sets_key>
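For example, with hypothetical values: ``redis://user1:pass1@127.0.0.1:6379/0?sets_key=peers``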
redis_cluster
-------------
Fetch peers from a redis cluster.
The value should be a *map*, with these keys:
* initial_nodes
**required**, **type**: :ref:`upstream str <conf_value_upstream_str>`
Set the address of the startup nodes.
* username
**optional**, **type**: str
Set the username.
.. versionadded:: 1.7.0
* password
**optional**, **type**: str
Set the password.
**default**: not set
* read_timeout
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Set the timeout for redis read operation.
**default**: 2s
* sets_key
**required**, **type**: str
Set the key for the sets that store the peers. Each string record in the set is a single peer.
See :ref:`peers <config_escaper_dynamic_peer>` for its formats.
.. _config_escaper_dynamic_peer:
Peers
=====
We use a json string to represent a peer, with a map as the root element.
Common keys
-----------
* type
**required**, **type**: str
It tells us the peer type.
* addr
**required**, **type**: :ref:`sockaddr str <conf_value_sockaddr_str>`
Set the socket address we can connect to the peer.
No domain name is allowed here.
* isp
**optional**, **type**: str
ISP for the egress ip address.
* eip
**optional**, **type**: :ref:`ip addr str <conf_value_ip_addr_str>`
The egress ip address from external view.
* area
**optional**, **type**: :ref:`egress area <conf_value_egress_area>`
Area of the egress ip address.
* expire
**optional**, **type**: :ref:`rfc3339 datetime str <conf_value_rfc3339_datetime_str>`
Set the expire time for this peer.
* tcp_sock_speed_limit
**optional**, **type**: :ref:`tcp socket speed limit <conf_value_tcp_sock_speed_limit>`
Set the speed limit for each tcp connections to this peer.
.. versionchanged:: 1.4.0 changed name to tcp_sock_speed_limit
The following types are supported:
http
----
* username
**optional**, **type**: :ref:`username <conf_value_username>`
Set the username for HTTP basic auth.
* password
**optional**, **type**: :ref:`password <conf_value_password>`
Set the password for HTTP basic auth.
* http_connect_rsp_header_max_size
**optional**, **type**: :ref:`humanize usize <conf_value_humanize_usize>`
Set the max header size for received CONNECT response.
**default**: 4KiB
* extra_append_headers
**optional**, **type**: map
Set extra headers to append to the requests sent to the upstream.
The key should be the header name; both the key and the value should be ascii strings.
.. note:: No duplication check is done here, use it with caution.
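For illustration, an *http* peer record might look like this sketch (all values are hypothetical):

.. code-block:: json

  {
    "type": "http",
    "addr": "203.0.113.10:3128",
    "username": "user1",
    "password": "pass1",
    "expire": "2024-01-01T00:00:00+08:00"
  }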
https
-----
* username
**optional**, **type**: :ref:`username <conf_value_username>`
Set the username for HTTP basic auth.
* password
**optional**, **type**: :ref:`password <conf_value_password>`
Set the password for HTTP basic auth.
* tls_name
**optional**, **type**: :ref:`tls name <conf_value_tls_name>`
Set the tls server name for server certificate verification.
.. note:: IP addresses are not supported for now, so if this is not set, the connection will fail.
**default**: not set
* http_connect_rsp_header_max_size
**optional**, **type**: :ref:`humanize usize <conf_value_humanize_usize>`
Set the max header size for received CONNECT response.
**default**: 4KiB
* extra_append_headers
**optional**, **type**: map
Set extra headers to append to the requests sent to the upstream.
The key should be the header name; both the key and the value should be ascii strings.
.. note:: No duplication check is done here, use it with caution.
socks5
------
* username
**optional**, **type**: :ref:`username <conf_value_username>`
Set the username for Socks5 User auth.
* password
**optional**, **type**: :ref:`password <conf_value_password>`
Set the password for Socks5 User auth.

View file

@ -0,0 +1,113 @@
.. _configuration_escaper_proxy_http:
proxy_http
==========
This escaper will access the target upstream through another http proxy.
The following interfaces are supported:
* tcp connect
* http(s) forward
There is no path selection support for this escaper.
The following common keys are supported:
* :ref:`shared_logger <conf_escaper_common_shared_logger>`
* :ref:`resolver <conf_escaper_common_resolver>`, **required** only if *proxy_addr* is domain
* :ref:`resolve_strategy <conf_escaper_common_resolve_strategy>`
* :ref:`tcp_sock_speed_limit <conf_escaper_common_tcp_sock_speed_limit>`
* :ref:`no_ipv4 <conf_escaper_common_no_ipv4>`
* :ref:`no_ipv6 <conf_escaper_common_no_ipv6>`
* :ref:`tcp_connect <conf_escaper_common_tcp_connect>`
* :ref:`tcp_misc_opts <conf_escaper_common_tcp_misc_opts>`
* :ref:`pass_proxy_userid <conf_escaper_common_pass_proxy_userid>`
* :ref:`use_proxy_protocol <conf_escaper_common_use_proxy_protocol>`
* :ref:`peer negotiation timeout <conf_escaper_common_peer_negotiation_timeout>`
* :ref:`extra_metrics_tags <conf_escaper_common_extra_metrics_tags>`
proxy_addr
----------
**required**, **type**: :ref:`upstream str <conf_value_upstream_str>` | seq
Set the target proxy address. The default port is 3128 which can be omitted.
For *seq* value, each of its element must be :ref:`weighted upstream addr <conf_value_weighted_upstream_addr>`.
proxy_addr_pick_policy
----------------------
**optional**, **type**: :ref:`selective pick policy <conf_value_selective_pick_policy>`
Set the policy to select next proxy address.
The key for rendezvous/jump hash is *<client-ip>[-<username>]-<upstream-host>*.
**default**: random
proxy_username
--------------
**optional**, **type**: :ref:`username <conf_value_username>`
Set the proxy username. The Basic auth scheme is used by default.
.. note::
Conflict with :ref:`pass_proxy_userid <conf_escaper_common_pass_proxy_userid>`
proxy_password
--------------
**optional**, **type**: :ref:`password <conf_value_password>`
Set the proxy password. Required if username is present.
bind_ipv4
---------
**optional**, **type**: :ref:`ipv4 addr str <conf_value_ipv4_addr_str>`
Set the bind ip address for inet sockets.
**default**: not set
bind_ipv6
---------
**optional**, **type**: :ref:`ipv6 addr str <conf_value_ipv6_addr_str>`
Set the bind ip address for inet6 sockets.
**default**: not set
http_forward_capability
-----------------------
**optional**, **type**: :ref:`http forward capability <conf_value_http_forward_capability>`
Set the http forward capability of the next proxy.
**default**: all capability disabled
http_connect_rsp_header_max_size
--------------------------------
**optional**, **type**: :ref:`humanize usize <conf_value_humanize_usize>`
Set the max header size for received CONNECT response.
**default**: 4KiB
tcp_keepalive
-------------
**optional**, **type**: :ref:`tcp keepalive <conf_value_tcp_keepalive>`
Set tcp keepalive.
The tcp keepalive set in user config won't be taken into account.
**default**: no keepalive set
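A sketch of a proxy_http escaper with basic auth (address and credentials are hypothetical):

.. code-block:: yaml

  escaper:
    - name: next_http
      type: proxy_http
      proxy_addr: 203.0.113.10:3128
      proxy_username: user1
      proxy_password: pass1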

View file

@ -0,0 +1,134 @@
.. _configuration_escaper_proxy_https:
proxy_https
===========
This escaper will access the target upstream through another https proxy.
The following interfaces are supported:
* tcp connect
* http(s) forward
There is no path selection support for this escaper.
The following common keys are supported:
* :ref:`shared_logger <conf_escaper_common_shared_logger>`
* :ref:`resolver <conf_escaper_common_resolver>`, **required** only if *proxy_addr* is domain
* :ref:`resolve_strategy <conf_escaper_common_resolve_strategy>`
* :ref:`tcp_sock_speed_limit <conf_escaper_common_tcp_sock_speed_limit>`
* :ref:`no_ipv4 <conf_escaper_common_no_ipv4>`
* :ref:`no_ipv6 <conf_escaper_common_no_ipv6>`
* :ref:`tcp_connect <conf_escaper_common_tcp_connect>`
* :ref:`tcp_misc_opts <conf_escaper_common_tcp_misc_opts>`
* :ref:`pass_proxy_userid <conf_escaper_common_pass_proxy_userid>`
* :ref:`use_proxy_protocol <conf_escaper_common_use_proxy_protocol>`
* :ref:`peer negotiation timeout <conf_escaper_common_peer_negotiation_timeout>`
* :ref:`extra_metrics_tags <conf_escaper_common_extra_metrics_tags>`
proxy_addr
----------
**required**, **type**: :ref:`upstream str <conf_value_upstream_str>` | seq
Set the target proxy address. The default port is 3128 which can be omitted.
For *seq* value, each of its element must be :ref:`weighted upstream addr <conf_value_weighted_upstream_addr>`.
proxy_addr_pick_policy
----------------------
**optional**, **type**: :ref:`selective pick policy <conf_value_selective_pick_policy>`
Set the policy to select next proxy address.
The key for rendezvous/jump hash is *<client-ip>[-<username>]-<upstream-host>*.
**default**: random
tls_client
----------
**required**, **type**: :ref:`openssl tls client config <conf_value_openssl_tls_client_config>`
Set TLS parameters for this local TLS client.
If set to empty map, a default config is used.
tls_name
--------
**optional**, **type**: :ref:`tls name <conf_value_tls_name>`
Set the tls server name to verify tls certificate for all peers.
If not set, the host part of each peer will be used.
.. note:: IP addresses are not supported for now
**default**: not set
proxy_username
--------------
**optional**, **type**: :ref:`username <conf_value_username>`
Set the proxy username. The Basic auth scheme is used by default.
.. note::
Conflict with :ref:`pass_proxy_userid <conf_escaper_common_pass_proxy_userid>`
proxy_password
--------------
**optional**, **type**: :ref:`password <conf_value_password>`
Set the proxy password. Required if username is present.
bind_ipv4
---------
**optional**, **type**: :ref:`ipv4 addr str <conf_value_ipv4_addr_str>`
Set the bind ip address for inet sockets.
**default**: not set
bind_ipv6
---------
**optional**, **type**: :ref:`ipv6 addr str <conf_value_ipv6_addr_str>`
Set the bind ip address for inet6 sockets.
**default**: not set
http_forward_capability
-----------------------
**optional**, **type**: :ref:`http forward capability <conf_value_http_forward_capability>`
Set the http forward capability of the next proxy.
**default**: all capability disabled
http_connect_rsp_header_max_size
--------------------------------
**optional**, **type**: :ref:`humanize usize <conf_value_humanize_usize>`
Set the max header size for received CONNECT response.
**default**: 4KiB
tcp_keepalive
-------------
**optional**, **type**: :ref:`tcp keepalive <conf_value_tcp_keepalive>`
Set tcp keepalive.
The tcp keepalive set in user config won't be taken into account.
**default**: no keepalive set
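A sketch of a proxy_https escaper (the address is hypothetical; an empty *tls_client* map selects the default TLS config):

.. code-block:: yaml

  escaper:
    - name: next_https
      type: proxy_https
      proxy_addr: proxy.example.net:3128
      tls_client: {}
      tls_name: proxy.example.net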

View file

@ -0,0 +1,93 @@
.. _configuration_escaper_proxy_socks5:
proxy_socks5
============
This escaper will access the target upstream through another socks5 proxy.
The following interfaces are supported:
* tcp connect
* udp_relay
* udp_connect
* http(s) forward
There is no path selection support for this escaper.
The following common keys are supported:
* :ref:`shared_logger <conf_escaper_common_shared_logger>`
* :ref:`resolver <conf_escaper_common_resolver>`, **required** only if *proxy_addr* is domain
* :ref:`resolve_strategy <conf_escaper_common_resolve_strategy>`
* :ref:`tcp_sock_speed_limit <conf_escaper_common_tcp_sock_speed_limit>`
* :ref:`udp_sock_speed_limit <conf_escaper_common_udp_sock_speed_limit>`
* :ref:`no_ipv4 <conf_escaper_common_no_ipv4>`
* :ref:`no_ipv6 <conf_escaper_common_no_ipv6>`
* :ref:`tcp_connect <conf_escaper_common_tcp_connect>`
* :ref:`tcp_misc_opts <conf_escaper_common_tcp_misc_opts>`
* :ref:`udp_misc_opts <conf_escaper_common_udp_misc_opts>`
* :ref:`peer negotiation timeout <conf_escaper_common_peer_negotiation_timeout>`
* :ref:`extra_metrics_tags <conf_escaper_common_extra_metrics_tags>`
proxy_addr
----------
**required**, **type**: :ref:`upstream str <conf_value_upstream_str>` | seq
Set the target proxy address. The default port is 1080 which can be omitted.
For *seq* value, each of its element must be :ref:`weighted upstream addr <conf_value_weighted_upstream_addr>`.
proxy_addr_pick_policy
----------------------
**optional**, **type**: :ref:`selective pick policy <conf_value_selective_pick_policy>`
Set the policy to select next proxy address.
The key for rendezvous/jump hash is *<client-ip>[-<username>]-<upstream-host>*.
**default**: random
proxy_username
--------------
**optional**, **type**: :ref:`username <conf_value_username>`
Set the proxy username. The User auth scheme is used by default.
proxy_password
--------------
**optional**, **type**: :ref:`password <conf_value_password>`
Set the proxy password. Required if username is present.
bind_ipv4
---------
**optional**, **type**: :ref:`ipv4 addr str <conf_value_ipv4_addr_str>`
Set the bind ip address for inet sockets.
**default**: not set
bind_ipv6
---------
**optional**, **type**: :ref:`ipv6 addr str <conf_value_ipv6_addr_str>`
Set the bind ip address for inet6 sockets.
**default**: not set
tcp_keepalive
-------------
**optional**, **type**: :ref:`tcp keepalive <conf_value_tcp_keepalive>`
Set tcp keepalive.
The tcp keepalive set in user config won't be taken into account.
**default**: 60s
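A sketch of a proxy_socks5 escaper (address and credentials are hypothetical):

.. code-block:: yaml

  escaper:
    - name: next_socks5
      type: proxy_socks5
      proxy_addr: 203.0.113.10:1080
      proxy_username: user1
      proxy_password: pass1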

View file

@ -0,0 +1,60 @@
.. _configuration_escaper_route_client:
route_client
============
.. versionadded:: 1.1.3
This escaper allows selecting a next escaper based on rules on the client address.
There is no path selection support for this escaper.
The following common keys are supported:
* :ref:`default_next <conf_escaper_common_default_next>`
exact_match
-----------
**optional**, **type**: seq
If the client ip exactly matches one in the rules, that escaper will be selected.
Each rule is in *map* format, with two keys:
* next
**required**, **type**: str
Set the next escaper.
* ips
**optional**, **type**: seq
Each element should be :ref:`ip addr str <conf_value_ip_addr_str>`.
An ip should not be duplicated across rules for different next escapers.
subnet_match
------------
**optional**, **type**: seq
If the client ip matches the longest subnet in the rules, that escaper will be selected.
Each rule is in *map* format, with two keys:
* next
**required**, **type**: str
Set the next escaper.
* subnets
**optional**, **type**: seq
Each element should be :ref:`ip network str <conf_value_ip_network_str>`.
A subnet should not be duplicated across rules for different next escapers.
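Putting these together, a sketch of a route_client escaper (all names and addresses are hypothetical):

.. code-block:: yaml

  escaper:
    - name: by_client
      type: route_client
      default_next: direct
      exact_match:
        - next: special
          ips:
            - 192.168.1.100
      subnet_match:
        - next: office
          subnets:
            - 192.168.2.0/24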

View file

@ -0,0 +1,19 @@
.. _configuration_escaper_route_mapping:
route_mapping
=============
This escaper allows selecting a next escaper based on the user specified path selection index.
If no index can be obtained from the path selection method, a random one will be used by default.
No common keys are supported.
next
----
**required**, **type**: seq
This sets all the next escapers. Each element should be the name of the target float escaper.
.. note:: No duplication of next escapers is allowed.
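A minimal sketch (escaper names are hypothetical):

.. code-block:: yaml

  escaper:
    - name: by_index
      type: route_mapping
      next:
        - line_a
        - line_b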

View file

@ -0,0 +1,132 @@
.. _configuration_escaper_route_query:
route_query
===========
This escaper allows selecting a next escaper based on a query to another service through a UDP socket.
There is no path selection support for this escaper.
No common keys are supported.
.. _configuration_escaper_route_query_fallback_node:
fallback_node
-------------
**required**, **type**: string
Set the fallback escaper name.
query_allowed_next
------------------
**required**, **type**: seq
Set all the next escapers that are allowed in the query result. Each element should be the next escaper name.
If the selected escaper name is not found in this list, the fallback escaper will be used.
.. _configuration_escaper_route_query_pass_client_ip:
query_pass_client_ip
--------------------
**optional**, **type**: bool
Set whether we should also send client_ip in the query message.
**default**: false
cache_request_batch_count
-------------------------
**optional**, **type**: usize
Set how many consecutive query requests we should handle in the cache runtime before yielding to the next loop iteration.
**default**: 10
cache_request_timeout
---------------------
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Set how much time we should spend waiting for responses from the cache runtime after sending a query request.
The fallback node will be used if a timeout occurs.
**default**: 100ms
cache_pick_policy
-----------------
**optional**, **type**: :ref:`selective pick policy <conf_value_selective_pick_policy>`
Set the policy to select next proxy address from the query result.
The key for rendezvous/jump hash is *<client-ip>*.
**default**: rendezvous
query_peer_addr
---------------
**optional**, **type**: :ref:`sockaddr str <conf_value_sockaddr_str>`
Set the socket address of the service that we should send queries to.
**default**: 127.0.0.1:1053
query_socket_buffer
-------------------
**optional**, **type**: :ref:`socket buffer config <conf_value_socket_buffer_config>`
Set the socket buffer config for the UDP socket we will use.
**default**: not set
query_wait_timeout
------------------
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Set how long we should wait for a response from the peer service.
An empty reply will be sent back to the cache runtime if a timeout occurs.
**default**: 10s
.. _configuration_escaper_route_query_protective_cache_ttl:
protective_cache_ttl
--------------------
**optional**, **type**: usize
Set the cache ttl for failed or zero-ttl query results.
**default**: 10
maximum_cache_ttl
-----------------
**optional**, **type**: usize
Set the maximum cache ttl for query results.
**default**: 1800
.. _configuration_escaper_route_query_vanish_after_expired:
cache_vanish_wait
-----------------
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
Remove the record from the cache after it has been expired for this long.
We keep expired records for a while before cleaning them, as a new query takes more time and its result is very likely
to be the same as the expired one.
**default**: 30s, **alias**: vanish_after_expire
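A sketch of a route_query escaper (escaper names are hypothetical; the peer address is the documented default):

.. code-block:: yaml

  escaper:
    - name: by_query
      type: route_query
      fallback_node: direct
      query_allowed_next:
        - direct
        - line_a
      query_peer_addr: 127.0.0.1:1053
      query_pass_client_ip: true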

View file

@ -0,0 +1,52 @@
.. _configuration_escaper_route_resolved:
route_resolved
==============
This escaper allows selecting a next escaper based on rules on the resolved upstream ip address.
There is no path selection support for this escaper.
The resolve method in Happy Eyeballs algorithm is used.
The following common keys are supported:
* :ref:`resolver <conf_escaper_common_resolver>`, **required**
* :ref:`resolve_strategy <conf_escaper_common_resolve_strategy>`
* :ref:`default_next <conf_escaper_common_default_next>`
lpm_match
---------
**optional**, **type**: seq
If the resolved upstream ip address matches (by longest prefix) a network in the rules, that escaper will be selected.
Each rule is in *map* format, with two keys:
* next
**required**, **type**: str
Set the next escaper.
* networks
**optional**, **type**: seq
Each element should be a valid network string. Both IPv4 and IPv6 are supported.
A network should not be set for different next escapers.
resolution_delay
----------------
**optional**, **type**: :ref:`humanize duration <conf_value_humanize_duration>`
The time to wait for the preferred address family after an address of the other family has been returned.
The meaning is the same as *resolution_delay* field in :ref:`happy eyeballs <conf_value_happy_eyeballs>`.
**default**: 50ms
.. versionadded:: 1.5.5
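A sketch of a route_resolved escaper (escaper names and networks are hypothetical):

.. code-block:: yaml

  escaper:
    - name: by_resolved_ip
      type: route_resolved
      resolver: default
      default_next: direct
      lpm_match:
        - next: internal
          networks:
            - 10.0.0.0/8
            - fd00::/8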

View file

@ -0,0 +1,30 @@
.. _configuration_escaper_route_select:
route_select
============
This escaper allows selecting a next escaper based on the specified pick policy.
There is no path selection support for this escaper.
No common keys are supported.
next_nodes
----------
**required**, **type**: string | seq
Set the next escaper(s) that can be selected.
For *seq* value, each of its element must be :ref:`weighted name str <conf_value_weighted_name_str>`.
next_pick_policy
----------------
**optional**, **type**: :ref:`selective pick policy <conf_value_selective_pick_policy>`
Set the policy to select next proxy address.
The key for rendezvous/jump hash is *<client-ip>[-<username>]-<upstream-host>*.
**default**: rendezvous
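A sketch of a route_select escaper; plain names are shown here, and per the weighted name str format weights can presumably be attached to each (all names hypothetical):

.. code-block:: yaml

  escaper:
    - name: balance
      type: route_select
      next_nodes:
        - line_a
        - line_b
      next_pick_policy: rendezvous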

View file

@ -0,0 +1,105 @@
.. _configuration_escaper_route_upstream:
route_upstream
==============
This escaper allows selecting a next escaper based on rules on the upstream address.
There is no path selection support for this escaper.
The following common keys are supported:
* :ref:`default_next <conf_escaper_common_default_next>`
exact_match
-----------
**optional**, **type**: seq
If the host part of the upstream address exactly matches the one in the rules, that escaper will be selected.
Each rule is in *map* format, with two keys:
* next
**required**, **type**: str
Set the next escaper.
* hosts
**optional**, **type**: seq
Each element should be :ref:`host <conf_value_host>`.
A host should not be duplicated across rules for different next escapers.
subnet_match
------------
**optional**, **type**: seq
If the host is an IP address and matches the longest subnet in the rules, that escaper will be selected.
Each rule is in *map* format, with two keys:
* next
**required**, **type**: str
Set the next escaper.
* subnets
**optional**, **type**: seq
Each element should be :ref:`ip network str <conf_value_ip_network_str>`.
A subnet should not be duplicated across rules for different next escapers.
child_match
-----------
**optional**, **type**: seq
If the domain of the upstream address is a child of a domain in the rules, that escaper will be selected.
Each rule is in *map* format, with two keys:
* next
**required**, **type**: str
Set the next escaper.
* domains
**optional**, **type**: seq
Each element should be :ref:`domain <conf_value_domain>`.
Each domain should not be set for different next escapers.
radix_match
-----------
**optional**, **type**: seq
If the domain of the upstream address exactly matches one of the domain suffixes in the rules,
that escaper will be selected.
Each rule is in *map* format, with two keys:
* next
**required**, **type**: str
Set the next escaper.
* suffixes
**optional**, **type**: seq
Each element should be :ref:`domain <conf_value_domain>`.
Each domain suffix should not be set for different next escapers.
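Putting these together, a sketch of a route_upstream escaper (all names and domains are hypothetical):

.. code-block:: yaml

  escaper:
    - name: by_upstream
      type: route_upstream
      default_next: direct
      exact_match:
        - next: special
          hosts:
            - www.example.com
      child_match:
        - next: partner
          domains:
            - example.net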

Some files were not shown because too many files have changed in this diff.