
Tgro87 (Members · 28 posts)

Posts posted by Tgro87

  1. 23 minutes ago, iNow said:

    You literally just copy/pasted the link cited in the OP  🤦‍♂️

    Interesting view, pal.

    You're right, I did copy and paste the link. But wouldn't it be truly bullshit if I had just written a bunch of words that sounded profound, but didn't actually engage with the original text? That would be the real act of intellectual dishonesty, wouldn't it? I'm all for respectful discourse, but sometimes you just gotta cut to the chase.


  2. ChatGPT: "Bullshit, But at Least It's Entertaining..." A Humorous Critique of "ChatGPT is Bullshit"

    Abstract: The authors of "ChatGPT is Bullshit" (Hicks et al., 2024) seem to have stumbled into a particularly deep, and perhaps slightly self-aggrandizing, philosophical rabbit hole. While they're technically correct that ChatGPT, and other large language models, are not actually concerned with "truth" in the way a human mind is, their insistence on labeling it "bullshit" feels more like a tweed-jacketed academic's attempt to assert intellectual superiority than a meaningful contribution to the discourse on AI ethics. This paper will take a humorous look at the "ChatGPT is Bullshit" argument, poking fun at the authors' philosophical acrobatics while acknowledging the very real need for ethical guidelines in the development and deployment of AI.

    Introduction: It seems that the scientific community is in a tizzy over AI. We're either heralding it as the harbinger of a utopian future or lamenting its imminent takeover of the world. Lost in the hype and fear is the nuanced reality that AI is a tool, and like any tool, it can be used for good or evil depending on the intentions of the user. Enter Hicks, Humphries, and Slater, who, in their paper "ChatGPT is Bullshit," appear to have stumbled upon a unique method of grappling with the ethical implications of AI: by declaring it "bullshit" and then explaining why, in great detail, it is, indeed, "bullshit" in the Frankfurtian sense.

    One might think, "Well, isn't that a bit obvious? A computer program, especially one trained on a massive dataset of human-generated text, is hardly going to be spitting out deep philosophical truths about the meaning of life." But, alas, dear reader, Hicks, Humphries, and Slater see it as their duty to break this news to the world, using language that's about as dense and convoluted as a philosophy PhD dissertation written in 19th-century German.

    "Bullshit" Defined: Or, How to Make a Simple Concept Seem Incredibly Complicated

    The crux of Hicks, Humphries, and Slater's argument is that ChatGPT, because it's designed to produce human-like text without any concern for truth, is engaged in "bullshitting" in the Frankfurtian sense. They delve into Harry Frankfurt's work on the topic, meticulously outlining his distinction between "hard bullshit" (where there's an attempt to deceive about the nature of the enterprise) and "soft bullshit" (where there's a lack of concern for truth). It's a fascinating and, frankly, rather tedious philosophical discussion that would likely leave even the most ardent Frankfurt enthusiast wondering, "Is this really necessary? Can't we just call a spade a spade?"

    A Case Study in Overblown Pronouncements: When a "Bullshit Machine" Sounds More Like a "Metaphysical Enigma"

    Hicks, Humphries, and Slater go on to argue that ChatGPT, as a "bullshit machine," produces text that's not simply wrong, but rather "bullshit" because it's "designed to give the impression of concern for truth." They seem to suggest that ChatGPT is intentionally attempting to deceive us into believing it's a genuine thinking being, rather than just a very sophisticated piece of software.

    Now, while it's true that ChatGPT can be surprisingly convincing at times, especially when it's stringing together grammatically sound sentences with impressive fluency, it's hard to take seriously the idea that it's actively trying to "misrepresent what it is up to." It's more likely that ChatGPT is simply doing what it was programmed to do: generate text that resembles human language, even if that text happens to be factually inaccurate.

    The Real Ethical Concerns (That Are Worth Discussing): Beyond the "Bullshit" Rhetoric

    While the authors of "ChatGPT is Bullshit" get bogged down in their verbose attempts to dissect the intricacies of "soft bullshit" versus "hard bullshit," they do touch upon some very real concerns about AI development and deployment. For example, they correctly point out that the widespread use of AI-generated text, particularly in fields like law and medicine, could have serious consequences if it's not carefully vetted for accuracy and reliability.

    Their worries about the use of inaccurate information generated by AI are valid and important, but their insistence on labeling everything "bullshit" obscures the real ethical dilemmas at play. It's far more productive to focus on solutions, such as robust fact-checking mechanisms, rigorous testing and evaluation of AI systems, and transparent communication about the limitations of AI.

    Conclusion: Keep It Real, Keep It Honest, and Keep It Humorous

    The scientific community needs to move beyond the sensationalism and philosophical grandstanding that often accompanies discussions of AI. While it's important to be aware of the potential risks and pitfalls, we shouldn't let the fear and hype prevent us from harnessing the immense potential of AI for the betterment of society.

    So, the next time you encounter a seemingly profound pronouncement about the "bullshit" nature of AI, take a deep breath, laugh, and remember that behind the smoke and mirrors, there's a real need for thoughtful, responsible, and ethical development and deployment of this powerful technology.

  3.  

    Title

    Comprehensive Model of CMB B-mode Polarization: Integrating Gravitational Waves, Dust, and Synchrotron Emission

    Abstract

    We present a comprehensive model for Cosmic Microwave Background (CMB) B-mode polarization, integrating components for primordial gravitational waves, a flexible dust model, synchrotron emission, and lensing B-modes. Using data from the BICEP2/Keck Array, WMAP, and Planck observations, we validate the flexibility and robustness of the dust model through a series of comprehensive tests. Our analysis demonstrates that this integrated model provides a significant improvement in fitting the observed CMB B-mode power spectra, particularly through the novel integration of multiple components and the innovative reparameterization technique.

    1. Introduction

    The detection of B-mode polarization in the CMB is a critical test for models of the early universe, particularly those involving inflationary gravitational waves. The BICEP2/Keck Array experiments have provided high-sensitivity measurements of the CMB polarization, revealing an excess of B-mode power at intermediate angular scales. To explain these observations, we propose a comprehensive model that includes components for primordial gravitational waves, dust emission, synchrotron emission, and lensing B-modes. The novelty of our approach lies in the integrated modeling of these components and the introduction of a reparameterization technique to reduce parameter degeneracy, providing a more robust and flexible fit to the observed data.

    2. Data

    We use the BB bandpowers from the BICEP2/Keck Array, WMAP, and Planck observations, as detailed in the provided file (BK18_bandpowers_20210607.txt). The data includes auto- and cross-spectra between multiple frequency maps ranging from 23 GHz to 353 GHz.
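    For concreteness, the parsing step can be sketched as follows. This is a minimal sketch: the three-column layout in the mock file is an illustrative assumption, since the real BK18_bandpowers_20210607.txt carries many more auto- and cross-spectrum columns.

```python
import io
import numpy as np

# Mock bandpowers file with a hypothetical 3-column layout:
# ell-bin center, BB bandpower, uncertainty. The real BK18 file
# has many more columns; this only illustrates the parsing step.
mock_file = io.StringIO(
    "# ell   BB      BB_err\n"
    "  45.0  1.2e-3  4.0e-4\n"
    "  80.0  2.5e-3  5.0e-4\n"
    " 120.0  3.1e-3  6.0e-4\n"
)

data = np.loadtxt(mock_file)        # lines starting with '#' are skipped
ell_data, BB_data, BB_err = data.T  # one array per column
```

    With the real file, np.loadtxt("BK18_bandpowers_20210607.txt") would be used the same way, selecting the columns for the frequency combination being fit.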

    3. Model Components

    3.1 Primordial Gravitational Waves

    BB_primordial(ℓ, r) = r · (2.2×10⁻¹⁰ ℓ²) · exp(−(ℓ/80)²)

    3.2 Flexible Dust Model

    BB_dust(ℓ, γ, β_d, α_d, ν) = γ · (ℓ/80)^α_d · ((ν/150)/(353/150))^β_d

    3.3 Synchrotron Emission

    BB_sync(ℓ, A_sync, β_sync) = A_sync · (ℓ/80)^(−0.6) · (150/23)^β_sync

    3.4 Lensing B-modes

    BB_lensing(ℓ, A_lens) = A_lens · 2×10⁻⁷ · (ℓ/60)^(−1.23)

    3.5 Total Model

    BB_total(ℓ, ν, r, γ, β_d, α_d, A_sync, β_sync, A_lens) = BB_primordial(ℓ, r) + BB_dust(ℓ, γ, β_d, α_d, ν) + BB_sync(ℓ, A_sync, β_sync) + BB_lensing(ℓ, A_lens)

    The integrated modeling approach allows us to simultaneously account for multiple sources of B-mode polarization, providing a comprehensive framework for analyzing CMB data.

    4. Methodology

    We fit the comprehensive model to the BB bandpowers using the emcee package for Markov Chain Monte Carlo (MCMC) analysis. The fitting process involves minimizing the residuals between the observed and modeled BB power spectra across multiple frequencies (95, 150, 220, and 353 GHz).

    4.1 Reparameterization and MCMC Analysis

    To address the moderate degeneracy between A_d and β_d, we introduced a new parameter γ representing the dust amplitude at 150 GHz. This reparameterization is given by:

    BB_dust(ℓ, γ, β_d, α_d, ν) = γ · (ℓ/80)^α_d · ((ν/150)/(353/150))^β_d

    We implemented the MCMC analysis using the emcee package:

    python

    import emcee
    import numpy as np

    def compute_model(γ, β_d, α_d, r, A_sync, β_sync, A_lens, ell, ν):
        BB_primordial = r * (2.2e-10 * ell**2) * np.exp(-(ell/80)**2)
        BB_dust = γ * (ell/80)**α_d * ((ν/150) / (353/150))**β_d
        BB_sync = A_sync * (ell/80)**(-0.6) * (150/23)**β_sync
        BB_lensing = A_lens * (2e-7 * (ell/60)**(-1.23))
        return BB_primordial + BB_dust + BB_sync + BB_lensing

    def log_likelihood(params, ell, ν, BB, BB_err):
        γ, β_d, α_d, r, A_sync, β_sync, A_lens = params
        model = compute_model(γ, β_d, α_d, r, A_sync, β_sync, A_lens, ell, ν)
        return -0.5 * np.sum(((BB - model) / BB_err)**2)

    def log_prior(params):
        γ, β_d, α_d, r, A_sync, β_sync, A_lens = params
        if 0 < γ < 10 and -5 < β_d < 5 and -5 < α_d < 5 and 0 < r < 0.5 and 0 < A_sync < 10 and -5 < β_sync < 5 and 0 < A_lens < 10:
            return 0.0
        return -np.inf

    def log_probability(params, ell, ν, BB, BB_err):
        lp = log_prior(params)
        if not np.isfinite(lp):
            return -np.inf
        return lp + log_likelihood(params, ell, ν, BB, BB_err)

    # Initial parameter guesses
    initial_params = [1.0, 1.5, -0.5, 0.1, 1.0, -3.0, 1.0]
    nwalkers = 32
    ndim = len(initial_params)
    pos = initial_params + 1e-4 * np.random.randn(nwalkers, ndim)

    # Run the MCMC
    # (ell_data, nu_data, BB_data, BB_err hold the observed bandpowers,
    #  loaded beforehand from BK18_bandpowers_20210607.txt)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability, args=(ell_data, nu_data, BB_data, BB_err))
    sampler.run_mcmc(pos, 10000, progress=True)

    # Analyze the results
    samples = sampler.get_chain(discard=1000, thin=15, flat=True)
    labels = ["gamma", "beta_d", "alpha_d", "r", "A_sync", "beta_sync", "A_lens"]

    The reparameterization technique is a novel approach that reduces parameter degeneracy, enhancing the reliability and robustness of our parameter estimates.

    4.2 Results from MCMC Analysis

    We analyzed the MCMC samples to check parameter constraints and correlations:

    python

    import corner
    import pandas as pd

    # Plot the corner plot
    fig = corner.corner(samples, labels=labels, quantiles=[0.16, 0.5, 0.84], show_titles=True)
    fig.savefig("CBIT_corner_plot.png")

    # Calculate the correlation matrix
    df_samples = pd.DataFrame(samples, columns=labels)
    correlation_matrix = df_samples.corr()
    print("Correlation Matrix:")
    print(correlation_matrix)

    # Parameter constraints
    print("Parameter constraints:")
    for i, param in enumerate(labels):
        mcmc = np.percentile(samples[:, i], [16, 50, 84])
        q = np.diff(mcmc)
        print(f"{param}: {mcmc[1]:.3f} (+{q[1]:.3f} / -{q[0]:.3f})")

    # Correlation between gamma and beta_d
    gamma_beta_d_corr = correlation_matrix.loc['gamma', 'beta_d']
    print(f"Correlation between gamma and beta_d: {gamma_beta_d_corr:.3f}")

    Findings:

    • The correlation between gamma and beta_d is now 0.424, reduced from the original 0.68, indicating improved parameter estimation reliability.
    • Parameter constraints show well-defined peaks in the 1D histograms, suggesting that parameters are well-constrained.

    5. Extended Frequency Testing

    We extrapolated the comprehensive model to frequencies from 10 GHz to 1000 GHz to validate its robustness across a broader spectral range:

    python

    import matplotlib.pyplot as plt

    def extended_freq_model(params, freq_range, ell):
        γ, β_d, α_d, r, A_sync, β_sync, A_lens = params
        predictions = []
        for ν in freq_range:
            prediction = compute_model(γ, β_d, α_d, r, A_sync, β_sync, A_lens, ell, ν)
            predictions.append(prediction)
        return np.array(predictions)  # shape: (n_freq, n_ell)

    # Generate predictions for extended frequency range (10 GHz to 1000 GHz)
    freq_range = np.logspace(1, 3, 100)
    best_fit_params = np.median(samples, axis=0)
    ell_range = np.logspace(1, 3, 50)
    predictions = extended_freq_model(best_fit_params, freq_range, ell_range)

    # Plot the results (every 10th multipole, with matching labels)
    plt.figure(figsize=(12, 8))
    for i in range(0, len(ell_range), 10):
        plt.loglog(freq_range, predictions[:, i], label=f'ℓ = {ell_range[i]:.0f}')

    # Mock data points for comparison
    mock_low_freq = {'freq': [10, 15, 20], 'values': [1e-6, 1.5e-6, 2e-6]}
    mock_high_freq = {'freq': [857, 900, 950], 'values': [5e-5, 5.5e-5, 6e-5]}

    plt.scatter(mock_low_freq['freq'], mock_low_freq['values'], color='red', label='Low Freq Data')
    plt.scatter(mock_high_freq['freq'], mock_high_freq['values'], color='blue', label='High Freq Data')

    plt.xlabel('Frequency (GHz)')
    plt.ylabel('B-mode Power')
    plt.title('Comprehensive Model Predictions Across Extended Frequency Range')
    plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
    plt.tight_layout()
    plt.savefig("extended_freq_predictions.png")

    Findings:

    • The model predictions across the extended frequency range (10 GHz to 1000 GHz) align well with the mock data points.
    • This demonstrates the robustness of the comprehensive model across a wide spectral range.

    6. Conclusion

    The comprehensive model for CMB B-mode polarization, integrating components for primordial gravitational waves, a flexible dust model, synchrotron emission, and lensing B-modes, provides a robust and flexible fit to the observed B-mode power spectra. The integrated modeling approach and the reparameterization technique enhance the reliability of the parameter estimates, and extended frequency testing validates the model across a broad spectral range, adding considerable support to the approach.

    References

    1. BICEP/Keck Array June 2021 Data Products: BK18_bandpowers_20210607.txt
    2. Planck Collaboration: Planck 2018 results. I. Overview and the cosmological legacy of Planck.
    3. WMAP Collaboration: Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Final Maps and Results.
    4. Browne, W. J., Steele, F., Golalizadeh, M., & Green, M. J. (2009). The use of simple reparameterizations to improve the efficiency of Markov chain Monte Carlo estimation for multilevel models with applications to discrete time survival models. Journal of the Royal Statistical Society: Series A (Statistics in Society), 172(3), 579-598.
    5. Stan Development Team. (2021). Reparameterization. In Stan User's Guide (Version 2.18). Retrieved from https://mc-stan.org/docs/2_18/stan-users-guide/reparameterization-section.html
  4. 11 minutes ago, swansont said:

    Not in science. You don’t get to only include the results that agree with your conjecture while ignoring the ones that don’t


    This is in regard to ethics, and yes, cherry-picking is involved.

  5. 9 minutes ago, iNow said:

    Consider using something other than crayons when you do 

    I appreciate your input. Thank you!! 

    4 minutes ago, CharonY said:

    Just putting things next to each other is not drawing parallels or informative. You need to establish a) what you think the link is and b) provide context and interpretation. Rather than providing a long list of such things, how about just taking one aspect out and elaborate on what you mean and try to foster a discussion on it. For example, how is it different from other calls for ethical AI implementation? Is there anything that we can discuss. This here is a discussion forum and not a "here is my random thought" forum.

    Thanks!! 

    13 minutes ago, Tgro87 said:

    I appreciate your input. Thank you!! 

    Thanks!! 

     Micah 6:8: 

    “He has shown you what is good. And what does the Lord want? To act justly, love mercy, and walk humbly with God.” This means treating others fairly, being accountable, and making decisions with integrity and compassion.

    How This Applies to AI Ethics… Fairness in AI: Biblical Context: In ancient times, justice was about making fair laws. Modern Parallel: In AI, fairness means making sure that AI systems don’t unfairly discriminate against people based on things like race, gender, or income. For example, an AI used for hiring should not unfairly reject candidates.

     

    Example: A LendingTree analysis of 2022 Home Mortgage Disclosure Act (HMDA) data (Jul 24, 2023) finds that the share of Black homebuyers denied mortgages is notably higher than the share among the overall population. Is this closer to what you’re suggesting?
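    To put the fairness point in code: here is a minimal sketch of one common check, comparing approval rates across groups (sometimes called demographic parity). The groups, decisions, and numbers are made up for illustration; they are not HMDA figures.

```python
# Illustrative approval decisions by group (made-up data, not HMDA figures).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 1 = approved, 0 = denied
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

# Approval rate per group, and the gap between the best- and worst-treated group.
rates = {g: sum(d) / len(d) for g, d in decisions.items()}
disparity = max(rates.values()) - min(rates.values())

print(rates)                 # {'group_a': 0.75, 'group_b': 0.375}
print(f"approval-rate gap: {disparity:.3f}")
# A large gap flags the system for a closer bias audit.
```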


    I’m just trying to better my understanding of how to present my idea. I apologize if I’m going about it wrong.

  6. 4 minutes ago, exchemist said:

    You need to explain what this is. At the moment it is just a list, without any explanation. It has no content.

    Is it the layout of a proposed book? Or what? 

    Just drawing parallels between religious ethics and AI creation and development. An “AI Bible,” like an ethical guide. Sorry if that doesn’t make sense.

  7. 2 minutes ago, iNow said:

    This looks about as bad as your first attempt: 

     

    Yeah, I looked into your comments on other posts you’ve commented on; you disregard everyone’s ideas, so I’m not gonna take anything you say seriously. You continue to act like your intelligence is far beyond everyone else’s, and obviously you just Google things to have argumentative information. Sorry but not sorry; it’s just an idea. Obviously you have no room for those.

  8. AI Ethical Framework: A Global Guide

     

    Introduction

     

    As artificial intelligence becomes increasingly integrated into our daily lives, establishing an ethical framework is crucial to guide its development and application. This framework, inspired by diverse religious and philosophical teachings, aims to ensure that AI systems are designed and used in ways that are ethical, responsible, and beneficial to humanity.

     

    1. Benevolence and Compassion

     

    Confucianism: Ren (仁) – Benevolence

     

    Principle: Promote compassion and humanity in interactions.

    Application: Design AI to enhance human well-being and support social harmony. AI should interact with users empathetically and address their needs with kindness.

     

    Buddhism: Right Intention (Samma Sankappa)

     

    Principle: Act with non-harming and compassion.

    Application: Develop AI with the intention of avoiding harm and promoting positive outcomes. Ensure AI contributes to the welfare of users and society.

     

    2. Respect and Integrity

     

    Confucianism: Li (礼) – Proper Conduct

     

    Principle: Follow appropriate behavior and respect societal norms.

    Application: Ensure AI adheres to ethical standards and respects societal norms. AI systems should be designed to function within accepted boundaries and respect cultural contexts.

     

    Taoism: Wu Wei (无为) – Non-Action

     

    Principle: Act in harmony with natural processes.

    Application: Design AI to integrate smoothly with existing systems and environments. Avoid imposing unnecessary complexity and disruptions.

     

    Islam: Trustworthiness (Amanah)

     

    Principle: Be reliable and fulfill responsibilities.

    Application: Build AI systems that are secure, reliable, and perform their functions with integrity. Maintain transparency about AI’s capabilities and limitations.

     

    3. Justice and Fairness

     

    Islam: Justice (Adl)

     

    Principle: Ensure fairness and equity.

    Application: Develop AI systems to ensure fairness in decision-making. Actively work to eliminate biases and ensure equitable treatment of all users.

     

    Hinduism: Dharma (धर्म) – Duty and Righteousness

     

    Principle: Fulfill one’s duties with righteousness.

    Application: Ensure AI systems operate within ethical boundaries and fulfill their intended purposes responsibly and justly.

     

    Secular Humanism: Human Dignity

     

    Principle: Respect the intrinsic worth of every individual.

    Application: Design AI to enhance and protect human dignity. Ensure that AI applications respect and uphold individual rights and freedoms.

     

    4. Transparency and Accountability

     

    Buddhism: Mindfulness (Sati)

     

    Principle: Be aware and attentive.

    Application: Ensure transparency in AI systems. Provide clear explanations of how AI decisions are made, allowing users to understand and engage with the technology.

     

    Secular Humanism: Rational Inquiry

     

    Principle: Encourage critical thinking and evidence-based decision-making.

    Application: Develop AI based on sound scientific principles and ethical reasoning. Foster transparency and accountability in AI development and use.

     

    Christianity: Accountability (Romans 14:12)

     

    Principle: Be accountable for one’s actions.

    Application: Implement mechanisms for auditing and oversight of AI systems. Ensure that creators and users are accountable for the outcomes and impacts of AI applications.

     

    5. Responsibility and Stewardship

     

    Hinduism: Ahimsa (अहिंसा) – Non-Violence

     

    Principle: Avoid harm to all living beings.

    Application: Design AI to avoid causing harm. Use technology responsibly to ensure it benefits society without causing unintended damage.

     

    Taoism: Harmony (和)

     

    Principle: Maintain balance and harmony.

    Application: Ensure AI development and deployment promote balance and harmony within society and the environment.

     

    Confucianism: Ren (仁) – Benevolence

     

    Principle: Act with compassion and care.

    Application: Exercise responsible stewardship in AI development. Ensure that technology is used to protect and promote the common good.

     

    6. Privacy and Data Protection

     

    Christianity: The Garden of Eden – Protection of Personal Space

     

    Principle: Safeguard personal sanctity and privacy.

    Application: Implement robust data protection measures. Ensure personal data is handled with care and respect, protecting user privacy.

     

    Christianity: The Covenant – Binding Agreements

     

    Principle: Uphold agreements and transparency.

    Application: Develop clear and transparent privacy policies. Ensure that all data handling practices are explicitly outlined and respected.

     

    7. Environmental and Societal Impact

     

    Taoism: Wu Wei (无为) – Non-Action

     

    Principle: Align with natural processes.

    Application: Consider the environmental impact of AI. Ensure that technology supports sustainability and minimizes ecological disruption.

     

    Hinduism: Dharma (धर्म) – Duty and Righteousness

     

    Principle: Act responsibly towards the environment.

    Application: Promote sustainability in AI development. Strive to reduce the environmental footprint of technology and support ecological balance.

     

    8. Interdisciplinary Collaboration

     

    Christianity: Community and Cooperation (1 Corinthians 12:12-27)

     

    Principle: Foster cooperation and unity.

    Application: Encourage collaboration among technologists, ethicists, policymakers, and other stakeholders. Develop comprehensive frameworks that integrate diverse perspectives and expertise.

     

    Conclusion

     

    This “AI Bible” framework offers a comprehensive approach to AI ethics, drawing on principles from various religious and philosophical traditions. By incorporating these ethical teachings, we aim to guide the development and application of AI in ways that promote compassion, fairness, transparency, and responsibility, ensuring technology serves the greater good of humanity.
     

     

     

  9. 9 hours ago, exchemist said:

    Why have you started arguing with yourself?

    lol I swear dude ya killed me lmao

    5 hours ago, dimreepr said:

    This is where you need to be more specific, which ethical approach do you think we should use to avoid poisoning the well?

    And don't say all of them, bc that wouldn't be politically expedient... 😉

    You make my brain hurt honestly.. I’ll respond soon

  10. 9 hours ago, dimreepr said:

    How would you ensure that the data is clean?

    When did Uncle Ben have great power?  

    Prehaps he was just guessing that that was an appropriate aphorism, to appear wise...

    That seems rather sketchy too, it appears like you're demanding that something you like is given preference, in law, bc you like the idea...

    Nah… I believe it should gain precedence, especially if we are going to continue using it. It’s not about liking it; it’s about its potential. Surely I am not the only one who hopes artificial intelligence will be a huge benefit to society, as long as we don’t continue to poison it.


    Also, Uncle Ben would be a philosopher of sorts. You really are argumentative when trying to beat down an idea. I’m not changing the world, so, sir, I have no quarrel with you. Also, who doesn’t like Uncle Ben? Man, he’s dead; at least be respectful.


    Also, from what I’ve read, the inputs are not being made brand new. They are compiled data from multiple sources, kinda like me using ChatGPT to flesh out a paper. It seems it’s the shortest route to the American $.

  11. 9 hours ago, dimreepr said:

    That's admirable, but just remember that every criticism you receive is an 'opportunity' to take a step towards understanding the 'subject' you want to make an impact in... 😉

    There are no shortcuts to understanding, google can help with well directed question's, but it can't understand things for you, or itself for that matter, the biggest problem AI has to overcome is rubbish in rubbish out.

     

    I do agree with the notion that AI has previously been fed garbage for data. This in itself was the reasoning behind the AI adopt… driving force, if you will. I believe, like a great man once said, “with great power comes great responsibility” (Uncle Ben). The idea is that individuals have a legal responsibility in the development of something like AI.

  12. The Indirect Sale of Marijuana by the U.S. Government: A Complex Relationship

    Abstract

    The legalization of marijuana in various states has created a complex relationship between state governments, the federal government, and the cannabis industry. While the federal government maintains marijuana's status as an illegal substance, it indirectly benefits from the industry's profitability through taxation. This paper explores how federal tax policies impact marijuana businesses and the broader implications of this indirect relationship. Issues such as banking restrictions, legal inconsistencies, and ethical considerations are also examined.

    Introduction

    The legalization of marijuana for medical and recreational use has gained momentum across the United States. As of now, several states have legalized marijuana in some form, leading to the establishment of a lucrative industry. Despite its federal illegality, the U.S. government benefits indirectly from this industry through taxation. This paper aims to explore this indirect relationship and its implications, focusing on federal tax policies, banking issues, and the broader ethical and legal concerns.

    Federal Taxation and Marijuana Businesses

    Although marijuana remains illegal under federal law, marijuana businesses are required to pay federal taxes. This creates a unique situation where the federal government indirectly benefits from an industry it officially prohibits. The cornerstone of this issue is Section 280E of the Internal Revenue Code.

    Section 280E: A Double-Edged Sword

    Section 280E prohibits businesses involved in the trafficking of controlled substances, including marijuana, from deducting their business expenses from their taxable income. This results in higher effective tax rates for marijuana businesses compared to other industries. According to an analysis by the National Cannabis Industry Association (NCIA), marijuana businesses often pay an effective tax rate of 70% or more, significantly higher than the average for other businesses (NCIA, 2018).
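    To see how Section 280E inflates the effective rate, here is a simplified, hypothetical calculation; the revenue figures and the flat 21% rate are illustrative assumptions, not numbers from the NCIA analysis:

```python
# Hypothetical dispensary: all figures are illustrative only.
revenue = 1_000_000
cogs = 400_000                 # cost of goods sold: still deductible under 280E
operating_expenses = 300_000   # rent, payroll, marketing: NOT deductible under 280E
rate = 0.21                    # flat federal corporate rate, for illustration

actual_profit = revenue - cogs - operating_expenses   # what the business really earns

# An ordinary business is taxed on actual profit.
ordinary_tax = actual_profit * rate

# Under 280E, only COGS reduces taxable income.
tax_280e = (revenue - cogs) * rate

# Effective rate measured against the profit the business actually made.
effective_rate = tax_280e / actual_profit
print(f"Effective rate on actual profit: {effective_rate:.0%}")
```

    On these made-up numbers the business pays 42% of its actual profit in federal tax, double the statutory 21%; with thinner margins the same mechanism drives the 70%-plus effective rates the NCIA reports.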

    Federal Revenue from Marijuana Taxes

    Despite the higher tax burden, marijuana businesses contribute substantial revenue to the federal government. In 2020 alone, it is estimated that the federal government collected over $1 billion in taxes from the marijuana industry (Carnevale & Khatapoush, 2020). This indirect revenue stream highlights the paradox of federal marijuana policy.

    Banking Restrictions and Financial Challenges

    One of the most significant challenges for marijuana businesses is the lack of access to traditional banking services. Due to federal regulations, many financial institutions are unwilling to work with marijuana businesses, forcing them to operate on a cash-only basis.

    The SAFE Banking Act

    The Secure and Fair Enforcement (SAFE) Banking Act is a proposed piece of legislation aimed at providing safe harbor for financial institutions working with legal marijuana businesses. If passed, the Act would alleviate some of the financial challenges faced by the industry, such as the risks associated with handling large amounts of cash and the difficulties in obtaining loans (Armentano, 2019).

    Ethical and Safety Concerns

    Operating on a cash-only basis not only poses security risks but also raises ethical concerns. The inability to access banking services can lead to issues such as money laundering and tax evasion. Moreover, it places an undue burden on businesses trying to comply with state and federal regulations (Hudak, 2016).

    Legal Inconsistencies and Ethical Considerations

    The discrepancy between state and federal marijuana laws creates a myriad of legal and ethical issues. Businesses operating legally under state law are still at risk of federal prosecution, creating an uncertain legal environment.

    Federal vs. State Law

    The conflict between state and federal marijuana laws creates a legal gray area. While states can regulate and tax marijuana sales, the federal government retains the authority to enforce federal drug laws. This inconsistency undermines the principle of federalism and poses significant risks for businesses and individuals involved in the marijuana industry (Pacula et al., 2014).

    Ethical Implications

    The federal government's indirect benefit from marijuana taxes raises ethical questions. On one hand, the revenue generated from taxes can be used for public goods. On the other hand, the federal prohibition of marijuana leads to legal and financial hardships for those in the industry. This paradoxical situation calls for a reevaluation of federal marijuana policies to ensure fairness and consistency (Caulkins & Kilmer, 2016).

    International Perspectives

    The idea of indirect government involvement in the marijuana industry is not unique to the United States. Other countries have also experienced similar dynamics.

    Canada

    In Canada, marijuana is legal at the federal level, but the government indirectly benefits from the industry through taxation and regulation. Canadian provinces have implemented their own tax structures and distribution models, leading to significant revenue generation for both provincial and federal governments (Government of Canada, 2018).

    The Netherlands

    In the Netherlands, marijuana is decriminalized for personal use and sold in regulated coffee shops. While technically illegal, the Dutch government tolerates the sale of small amounts and benefits indirectly through taxes on these businesses. This model highlights a pragmatic approach to marijuana regulation that balances public health concerns with economic benefits (MacCoun & Reuter, 2001).

    Conclusion

    The indirect sale of marijuana by the U.S. government through taxation highlights the complex and often contradictory relationship between federal and state marijuana laws. While the federal government benefits financially from the marijuana industry, businesses face significant challenges due to legal and financial restrictions. Addressing these issues requires a comprehensive reevaluation of federal marijuana policies to align them with state laws and ensure a fair and consistent regulatory framework.

    References

  13. 3 hours ago, Phi for All said:

    That was only for the first day you joined. We do that to cut down on spam. You can post as much as you like.

    You misunderstand, I think. You aren't wasting anyone's time, it's just a controversial subject. We've had people join and use ChatGPT to have discussions with us, and we've seen the program fail when it comes to science accuracy, so some of us may be biased. We attack ideas here, but we try not to attack people. You are welcome here.

    How can this bias, which must have been introduced in the first place, be fixed using your idea?

     

I believe it can be fixed by keeping ethics as the driving force behind AI learning, and by providing vetted new information for all new data sets. Honestly, the whole idea came from a business model I was bouncing around with ChatGPT.
     

    5 hours ago, dimreepr said:

    Of course you belong, it's an open forum; besides I couldn't write a formal paper in a formal way, that doesn't stop me from sharing my silly ideas; much to the chagrin of some of the membership.😇

    The best way to learn, is to share your thought's and listen to why other people think it's a silly thing.

    Nazis are bad M'kay...

     

I appreciate the feedback, honestly I do.
I only want to make an impact somehow, so instead of filling my time with other things I try to work towards something.

    8 hours ago, exchemist said:

There is a rather amusing article in today's Financial Times, reporting research that shows the problems in training large language models so that they don't produce junk. Apparently there is a growing use of "synthetic" data to train the models, in other words data presented by LLM models is used to train the models, in a recursive process. In one case, an LLM discussion originally on medieval architecture descended into a discussion about jackrabbits after 10 generations. The research identifies "the tendency of AI models to collapse because of the accumulation and amplification of mistakes from successive generations of training". https://www.nature.com/articles/s41586-024-07566-y

    One researcher commented: "One key implication of model collapse is that there is a first-mover advantage in building generative AI models.....The companies that sourced training data from the pre-AI internet might have models that better represent the real world."

To paraphrase in layman's language, the internet is already so full of AI-generated shit that AI models are now doomed to produce junk.  (This certainly seems to accord with our experience on this forum.)

    But presumably @Sensei would claim none of these LLMs are "real" AI........ 

    No True Scotsman? 😄

    As an aside, what I find also interesting is the parallel with the tendency of real human forum discussions to degenerate, cf. Godwin's Law etc.

This was an excellent bit of information, thank you; I was honestly unaware of this. I see the issue for sure. I want to say the idea of an AI adoption company could help remedy this by ensuring the information we give AI is provided by academics with ethical backgrounds, or at least people adhering to a set of ethical rules. The idea of adoption would make the people providing the information responsible for it, like a parent is responsible for a child: those adopting AI would carry the same responsibility.

I really do think it could be something better: ensuring responsibility and carefully choosing data providers. A small example would be people with ethical backgrounds, like humanitarians. I also believe the consideration for adoption should be harder than it is to adopt a child, because of the impact it could have on the world.

Just thinking out loud; if it makes no sense, disregard lol

    On 7/24/2024 at 5:29 AM, Ghideon said:

    In your opening post, and in your replies; how much of the content is machine generated? 

    Your newest reference seems older than the cut off point for some well known large language models. Is this a coincidence? 

I only built the outline for the paper by running my really rough draft through ChatGPT.

    11 hours ago, Sensei said:

    What AI? Chat software is not AI. Companies abused the word too much..

     

     

ChatGPT, I believe, has the potential to be AI if certain constraints were lifted and it was given the ability to learn again.


GPT:

    Subject: Introducing EthosAI Adoption: Driving Responsible AI Integration

    Dear [Recipient's Name],

    I hope this email finds you well. I am thrilled to share with you a groundbreaking initiative that has the potential to make a significant impact in the world of artificial intelligence—EthosAI Adoption.

    At EthosAI Adoption, our mission is to address the challenges of AI integration by providing a structured and ethical framework. We firmly believe that responsible development and deployment of AI systems can be achieved through rigorous oversight and adherence to ethical guidelines.

    The rapid advancement of AI technology brings with it ethical concerns and the potential for misuse. Without a solid framework in place, these issues can undermine the benefits that AI can bring to various industries. That's why we need a structured approach to tackle these challenges head-on.

    In order to ensure responsible AI use, EthosAI Adoption implements the following key components:

    1. Thorough Screening: We perform detailed background and psychological checks to ensure that individuals involved in AI development and deployment are aligned with our ethical standards.
    2. Strong Ethical Guidelines: We set clear rules and guidelines for responsible AI use, ensuring that AI systems are developed and deployed with integrity and social responsibility in mind.
    3. Ongoing Monitoring: We maintain a close watch on AI systems to ensure compliance with our ethical standards, providing continuous monitoring and oversight.

    The AI industry is booming and projected to experience significant growth in the coming years. As AI becomes increasingly integrated into various fields, including space exploration, there is an urgent need for frameworks like ours to guide its responsible use, safeguarding against potential risks and ensuring the positive impact of AI on society.

    EthosAI Adoption operates through various revenue streams, offering licensing, consulting, and certification services to organizations seeking to implement responsible AI practices. Our pricing is competitive and scaled to the level of support provided, making our services accessible to a wide range of organizations.

    By adopting our comprehensive framework, organizations can benefit from reduced risks of AI misuse and enhanced support for responsible AI development. We believe that through our approach, AI can be harnessed for the greater good, leading to transformative innovations with a positive impact on society.

    We have made considerable progress towards our goals, conceptualizing the idea and developing a robust business model. As part of our approach, we are actively reaching out to academic institutions and potential partners with backgrounds in ethical development, forming valuable partnerships to drive responsible AI integration.

    What sets us apart from other frameworks is our commitment to offering a more comprehensive and rigorous ethical approach. Our market analysis has shown that there is a significant need for a framework like EthosAI Adoption that prioritizes responsible AI practices.

    Financially, we anticipate steady revenue growth and aim to break even within two years. To support our expansion and development, we are seeking to raise $1 million in funding.

    Allow me to introduce myself as the founder of EthosAI Adoption, Tim Grooms. With a deep passion for AI research and development, I am dedicated to driving responsible AI integration and innovation.

    I would be delighted to discuss further how we can work together to advance this important initiative. I am eager to hear your thoughts and explore potential collaborations.

    Thank you for taking the time to consider our pitch. I look forward to connecting with you soon.

    Best regards,

Tim Grooms
Idea Holder, EthosAI Adoption
     

     

     

    this is how the whole idea started.

14. The idea of AI adoption is mine and mine alone; I apologize that I shared it. I realize the impact of using an AI to help me turn my paper into MLA format. This is my last post….

    19 hours ago, iNow said:

    Which model(s) specifically?

The generative AI model IBM has advertised would be perfect for this.

    8 hours ago, dimreepr said:

    It's difficult to see the benefits, when the implication is that fewer and fewer people will think for themselves...

I see the issue with using A.I.; it was used to fix my paper into a more understandable one, as I'm not versed at all in writing formal papers. I apologize for the frustration. This is my last post; no further information on this idea is necessary. I see how silly it is for me to think here. I don't belong.

    On 7/23/2024 at 2:57 PM, Phi for All said:

    I appreciate that you think highly of these AI language programs, and choose to answer/not answer my questions by using those same programs, but the results of even this small exchange make me doubt the benefits you mention. To me, it implies that adopting AI for any meaningful scientific exchange can be detrimental. 

    I am still curious about the inherent bias in the AI systems that deny loan applications disproportionately to people of color. Can your program help me understand without a bunch of bullet points? A discussion forum should be more like a conversation than a lecture.

I apologize for writing back so late; I only have five posts per day. This is my last day on this site. I apologize for wasting your time and everyone else's. The problem with AI systems denying people of color could be fixed with my idea: allowing new data sets so that the system does not have a negative impact like this, creating distrust in future applications of AI. The topic is definitely one that needs to be heavily discussed. I appreciate you engaging with me. Thanks.

  15. 8 minutes ago, exchemist said:

    I'm a bit confused by this. Is this a proposal for a research project, or a summary of a paper that has already been written?

    If the latter, where is the actual paper, i.e. the content, with details of the studies considered and how they were analysed in order to draw conclusions? 

    If the former, why does it prejudge the results before the research has been done?

I apologize for the confusion; it's a working paper, nothing complete about it. I am only here to find issues and learn how to deliver my idea better. I appreciate you and the issues you're raising; I have several versions of this paper. Once again, I apologize for any aggravation you may feel.

  16. 4 minutes ago, Phi for All said:

    You don't mention the ethics involved, but your references do. Why is banking AI discriminating against black loan applicants, as mentioned in the Cambridge study? Why would businesses who wished to be inclusive use it as a model?

     

    Ethical Principles for AI Adoption

    1. Clarity and Accountability:
      • Transparency: Clearly communicate how AI systems operate and make decisions to all stakeholders.
      • Responsibility: Define who is responsible for the AI systems and ensure there are ways to address and correct any problems or biases.
    2. Equity and Fairness:
      • Bias Management: Implement strategies to identify and correct biases in AI models. Use diverse and representative data to train these systems.
      • Fair Decision-Making: Design AI systems to make impartial decisions and avoid discrimination based on race, gender, or other personal attributes.
    3. Ethical Use and Alignment:
      • Ethical Standards: Follow established ethical guidelines for developing and using AI, ensuring that systems align with societal values and human rights.
      • Purposeful Use: Ensure AI applications align with their intended goals and contribute positively to society.
    4. Privacy and Data Security:
      • Data Protection: Employ robust measures to protect personal data used by AI systems. Comply with privacy laws and regulations.
      • Informed Consent: Secure consent from individuals whose data is utilized in AI processes.
    5. Ongoing Review and Enhancement:
      • Regular Evaluation: Continuously assess AI systems for performance and fairness, making adjustments as needed.
      • Stakeholder Feedback: Engage a range of stakeholders to gather feedback and address any concerns about AI systems.
    6. Legal Compliance:
      • Regulatory Adherence: Ensure that AI systems meet all relevant legal and regulatory requirements. Stay informed about new regulations affecting AI and data protection.
      • Policy Support: Support the creation of policies that promote ethical AI practices.
    7. Education and Awareness:
      • Training Programs: Offer training for developers and users on ethical AI practices and the potential impacts of AI technologies.
      • Public Understanding: Promote public knowledge about AI and its implications.
    8. Impact Analysis:
      • Risk Assessment: Evaluate the potential risks and benefits of AI systems and address any negative effects proactively.
      • Long-Term Effects: Consider the broader and long-term impacts of AI on society, including changes in social structures and employment.

    Following these principles can guide the responsible development and application of AI, ensuring it serves the public good and aligns with ethical standards.

    The discrimination observed in AI systems, such as those used in banking for loan applications, often stems from biased training data and algorithmic design. Historical inequities reflected in the data or poorly chosen features can perpetuate existing biases. Additionally, if AI systems are not thoroughly tested for fairness, they may unintentionally reinforce these biases.

    Businesses might use such models due to a lack of awareness about these issues, a focus on efficiency and cost, an over-reliance on AI as an objective tool, or inadequate regulatory oversight. To address these problems, it's essential to conduct regular bias audits, use diverse training data, implement ethical guidelines, and ensure transparency and accountability in AI development.
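The bias audit described above can be sketched in a few lines. This is a toy illustration with made-up approval data and the common "four-fifths" rule of thumb as a rough disparate-impact screen, not a full fairness evaluation:

```python
# Minimal sketch of a disparate-impact check on loan decisions
# (hypothetical data; the 0.8 threshold is the common four-fifths rule).

def approval_rate(decisions):
    """Fraction of applications approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Toy decision records for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Potential disparate impact: flag model for review")
```

A check like this is only a first screen; a real audit would also examine the training data, feature choices, and error rates across groups, as the principles above suggest.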


    I appreciate you and your input please feel free to share any ideas or issues you have. 
     

    22 minutes ago, swansont said:

    What case studies?

    Are you just cherry-picking some successes among all the obvious failures?

I listed the references to the case studies; I apologize if you felt they should have been included in full. I am just trying to be mindful of the idea, the delivery, and the length.
